All the cool new stuff from Google’s 2018 I/O developers conference

It’s developers conference season. Last week we got to see what Facebook had to offer, and yesterday we heard from Microsoft. Now, however, it’s time for Google to put on a show for its developers and the rest of us watching via livestream.

What happened last year?

Last year’s conference wasn’t huge on blockbuster announcements like the previous year’s, at which we met the Google Home. 2017 brought Android Oreo, as well as news about Lens, the augmented reality tech that tells you about the places and objects you see through your phone’s camera.

Last year was also a big year for Google Assistant, which made moves across the electronics landscape. Google Assistant even came to the iPhone. I remember getting excited about that when it was announced, but I’ve used it roughly five times in a year.

Check out the full rundown of stuff here. Now, on to the 2018 announcements!

Google Assistant John Legend

You’ll be able to use John Legend’s voice as your Google Assistant later this year.

AI is the star of the show

Google is working on building AI centers around the world because pretty much everything the company does will involve “deep learning.” One of the big use cases is medical evaluation. CEO Sundar Pichai showed off an ocular scanner that can glean medical risk information by looking at a person’s eyeball, without a doctor present. Another AI project analyzed thousands of data points about a patient’s health and tried to predict whether they would have a medical event. It’s promising, but it’s also a little worrying.

Google Photos

Users are looking at more than 5 billion photos on Google Photos every day. Now, Google is building AI features directly into the feed. So, if it recognizes a person using facial recognition, it will automatically suggest that you share the photos with them. There are also some automatic photo enhancements you can enable, including one that “colorizes” black-and-white photos. Personally, the idea of AI adding false color to a monochrome image makes me extremely sad—just let it be black-and-white. The demo is impressive, though. It’s reminiscent of this tech I wrote about a few years ago.

Google Assistant

There are 500 million devices out there that are compatible with Google Assistant. The company is pushing a new “Make Google do it” catchphrase to get people asking the Assistant for things.

Google Assistant is also getting new voices, including that of John Legend (which will be available later this year).

Continued Conversation is a new mode that lets you talk to Google Assistant without saying “Hey Google” before every request. It will keep listening after the first one, which is a nice feature. Another feature that’s rolling out is Multiple Actions, which handles compound requests. I can finally ask it to turn off the living room and bedroom lights all in one shot. I’m irrationally excited about this.

Google Assistant Continued Conversation

Not having to say “OK Google” over and over is a very nice addition.

Google Assistant etiquette

Google is worried that kids are learning bossy behavior from their blunt interactions with Google Assistant. Pretty Please is a new feature that encourages kids to speak nicely and say “please” and “thank you” to the Assistant. I know adults who could use this in real life.

Google Assistant on smart displays

We first heard about Google’s Smart Displays back at CES, and now we know they’ll go on sale later this year. The first demo shows Jimmy Kimmel playing on a smart screen through YouTube TV. Then, there’s a recipe for “pizza bombs.” The recipe example makes sense, but the TV example seems wonky because the screen is so small and awkwardly placed in the home. That kind of content belongs on a TV.

Assistant visual experience

A new food pickup and delivery partnership will let you buy stuff using Google Assistant, something we’ll likely see more and more of as time passes. It’s part of the new “visual experience” for Google Assistant, which lets you see more information as you ask it questions. So, it can show you maps, flight statuses, and suggestions.

Assistant for making reservations

One of the most impressive demos is Google Duplex, which allows you to ask Assistant to actually make phone calls for you. The demos include calling a restaurant and a hair salon. In the examples, the Assistant sounds impressively real. It recognizes context clues during the call and it even adds “ums” and “ahs” to sound more like a real person. It seems handy, but I’m very interested to see how it works in the real world.

Digital Wellbeing

Google will help save you from your phone.

Digital Wellbeing

Starting this week, Google is putting an emphasis on what it calls Digital Wellbeing. It’s essentially trying to help everyone manage their own screen time. The new version of Android will have a dashboard that shows you how much time you spend on your phone, how much time you spend in apps, and even how many times you visit an app during the day. I genuinely fear seeing this information about myself.

Google News

News isn’t escaping the long arm of AI. The app now gives the top five stories it thinks you need to know right now. Then, you scroll down through other articles recommended for you. Google says it specifically pulls in local stories and events. The app uses all the information it already knows about you (which is a lot) to tweak your feed.

There’s a new feature called Newscasts, which creates slideshows of articles and short videos with headlines. The AI picks out the information to show you.

Stories now have a Full Coverage button that allows you to dig into one story on a deeper level. When a story has been ongoing, Google puts the developments into a timeline so you can scroll through past stories. This also seems like a way to avoid the bias accusations that Facebook has endured since the last election. By allowing people to click through to see other viewpoints, it’s easier for the company to say that it’s acting fairly.

The Newsstand section of the app will let you follow publications and put their content in your feed. You can now also subscribe to paid publications like the Wall Street Journal directly through your Google account.

The new Google News app starts rolling out today on Android and iOS, and it will be available to everyone by next week.

Android P

The next version of Google’s mobile OS has a lot of new features, so here are short descriptions of each.

Adaptive Battery: DeepMind and Google put on-device machine learning to work to determine which apps you’re probably going to use. Then, the OS dedicates battery and processing power just to those apps, which Google claims results in a battery-life boost for many users.

Adaptive Brightness: The OS learns how you like to set the brightness slider relative to the ambient lighting. This is an update to the previous system, which relied on ambient light levels alone.

App Actions: Instead of simply recommending apps you may want to use, Android P will suggest specific actions. So, instead of just showing you the phone app, Android P will suggest you call your sister. It’s an extra layer of recommendations.

App Slices: A new API lets developers put interactive snippets of their apps around the OS, starting with search. So, if you type Lyft into the search app, a slice of the Lyft app will show up in the results and show you a price. Google Photos slices can also surface photos from your vacation when you search for places you’ve been.
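
For a feel of what that API looks like to developers, here’s a rough sketch of a Slice provider built with the androidx.slice libraries (slice-core and slice-builders). The package name, the RidePriceSliceProvider and MainActivity classes, the icon resource, and the ride data are all hypothetical placeholders rather than anything Google or Lyft has shipped, and a real provider would also need to be declared in the app’s manifest.

```kotlin
// A minimal, illustrative Slice provider. Assumes the androidx.slice
// slice-core and slice-builders dependencies; names and data are made up.
package com.example.rides

import android.app.PendingIntent
import android.content.Intent
import android.net.Uri
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction

class RidePriceSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    // Called when a surface (such as search) requests the slice at this URI.
    override fun onBindSlice(sliceUri: Uri): Slice? {
        val ctx = context ?: return null
        if (sliceUri.path != "/ride") return null

        // Tapping the row opens the app; MainActivity and ic_car are placeholders.
        val openApp = SliceAction.create(
            PendingIntent.getActivity(ctx, 0, Intent(ctx, MainActivity::class.java), 0),
            IconCompat.createWithResource(ctx, R.drawable.ic_car),
            ListBuilder.ICON_IMAGE,
            "Open ride details"
        )

        // One templated row: title, subtitle, and a tap action the host can render.
        return ListBuilder(ctx, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Ride to work")
                    .setSubtitle("\$10.14 estimate, 4 minutes away") // made-up data
                    .setPrimaryAction(openApp)
            )
            .build()
    }

    // Maps an incoming intent (e.g. from search) to the slice URI to display.
    override fun onMapIntentToUri(intent: Intent): Uri =
        Uri.parse("content://com.example.rides/ride")
}
```

The host surface (search, in Google’s demo) renders that row with its own styling, which is how a price can show up in results without opening the full app.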

UI tweaks: Gestures now handle most of the navigation around the OS, something that people clearly like from the iPhone X. Sliding the home button sideways now scrubs through your apps and makes them easier to switch between. Android P also addresses some “pain points” in the UI. For instance, the volume slider now controls the media volume by default instead of the ringer.

Android P bedtime

Your Android P phone will shame you into going to sleep instead of staring at apps in your bed all night.

App timer: If you’re spending too much time in an app like e-mail or Hearthstone, you can now give yourself a time cap; once you hit it, the app’s icon grays out.

Shush: A more advanced “do not disturb” mode now silences all notifications, unless they come from your important contacts.

Wind Down mode: Android P will now let you choose a bedtime. As you approach that time, it will turn on Do Not Disturb and then fade your screen to black-and-white to encourage you to put the stupid phone down and go to sleep.

Google Maps

Following the theme of the event, Google Maps is getting even more computer vision baked in. For instance, now you can use the camera when you’re following directions to see an augmented reality view of the street. It will lay a map over the real-time view to show you which way to go and give you information about surrounding businesses. It can even put an AR cartoon fox in the scene to show you which way to go.

Visual Positioning System

Augmented reality maps help you walk around without staring at a blue dot on a map that may or may not be going the right way.

Google Lens

Sticking with the computer vision theme, Google is also updating its Lens technology, which overlays information on the world as seen through your camera. It can now recognize text in the real world, so you can copy and paste things from recipes or the latest issue of Popular Science.

Style Match lets you find items like home decor and clothing, then search for them on the web so you can buy them.

Google also says it hopes Lens will one day put live search results right over objects in the real world. So, if you point it at a concert poster, it could automatically play a YouTube video from the artist.

Waymo

With all this talk about computer vision, it makes sense to transition to self-driving cars. Google has been working on the tech since 2009, which seems like forever ago now.

Phoenix will be the first stop for Waymo’s self-driving taxi service. Riders will use the Waymo app to hail a ride.

The Waymo portion of the presentation is focusing heavily on the progress the company has made in detecting pedestrians. This is particularly relevant right now, because it was recently reported that Uber’s self-driving system detected the pedestrian its car hit, but the software decided not to react.

Google Waymo self-driving car

Waymo showed off some of its self-driving car tech, specifically designed to avoid hitting pedestrians.

Waymo says its fleet has more than 6 million miles of self-driving testing on public roads.

Google I/O continues for the next few days, so stay tuned here for more updates!