Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1


Google interlacing ML in all their products


Machine Learning and AI are becoming an increasingly important part of digital products

With the Google I/O conference having come to a close a couple of weeks ago, there were many new products and improvements introduced at this year's event, including the exciting release of the new Material You design system. But one thing in particular became apparent throughout the whole three-day event: the increased use and relevance of machine learning in customer-facing products and services at Google.

ML and AI everywhere

It seems like Google is determined to interlace machine learning into every aspect of the company. Everybody knows that Google has been an industry leader in the area for a while now, using ML extensively in its backend services. Services like Gmail and Google Photos in particular have used machine learning to enhance the user experience for some time. But now these superpowers have been opened up to the public as well.

For starters, the new Material You design system will morph its colors depending on user preferences, calculated by an AI. As far as I know, this is the first time AI has been used to style UI in the front end, and it will be exciting to see exactly how far that can go.

Google has enabled ML on almost any device and platform. TensorFlow, Google's open source ML project, has released SDKs covering any device you can think of! With TensorFlow Lite we can even run ML on board a microcontroller, opening up a whole world of possibilities.

In one of the keynotes they showed how TensorFlow.js can be used directly in the browser. The example was a message board where the ML model discards any nasty comments directly in the front end, before they are ever submitted to the server.
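The flow described above might look something like the sketch below. The keynote demo presumably used a real TensorFlow.js model (such as the toxicity classifier); here the model call is stubbed out with a hypothetical keyword check so the sketch is self-contained, but the shape of the flow is the same: classify in the browser, and only post to the server if the comment passes.

```javascript
// Stand-in for a TensorFlow.js model call, e.g. toxicity.load()
// followed by model.classify(text). The keyword list is purely
// illustrative, not how the real model works.
async function classifyComment(text) {
  const nastyWords = ["idiot", "stupid"];
  const isToxic = nastyWords.some((w) => text.toLowerCase().includes(w));
  return { toxic: isToxic };
}

// Runs before the form ever submits: toxic comments are discarded
// in the front end and never reach the server.
async function submitComment(text, postToServer) {
  const result = await classifyComment(text);
  if (result.toxic) {
    return { submitted: false, reason: "flagged in the front end" };
  }
  await postToServer(text);
  return { submitted: true };
}
```

The nice property of this design is that the server never even sees the rejected comment, so moderation costs nothing in backend traffic.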

Vision and Voice

Some of the most impressive demos throughout the conference were of the vision and voice AI that Google has improved upon. The vision AI used in Google Lens never fails to amaze, but even more mind-blowing was the Google LaMDA presentation. You can now have a conversation with Pluto (yes, Pluto the ex-planet), asking it questions and getting original answers made up on the fly.

Not unlike the GPT-3 algorithm, LaMDA allows you to have truly original conversations with an AI

Google LaMDA seems to be comparable to OpenAI's GPT-3 model, which was released in limited beta a while back and has yielded a bunch of truly spectacular demos developed by the community.

What’s next

It will be exciting to see what applications the community finds for these new AI-powered products and services. I am glad to see these changes, with Google contributing to the renaissance of ML and AI. Exciting times ahead!



Written by Adam Wattis

Javascript developer. Passionate learner.
