The highlights and lowlights from the Google AI event

Google Maps, Search, Translate, and more are getting an AI update.
Google SVP Prabhakar Raghavan at the AI event in Paris. Google / YouTube


Google search turns 25 this year, and although its birthday isn’t here yet, executives at the company announced today that the search function is getting some much-anticipated AI-enhanced updates. Beyond search, Google is also expanding its AI capabilities into new and improved features across its translation service, maps, and its work with arts and culture. 

After announcing on Monday that it was launching Bard, its own ChatGPT-like AI chatbot, Prabhakar Raghavan, senior vice president at Google, introduced it live at a Google AI event streamed Wednesday from Paris, France. 

Raghavan highlighted how Google-pioneered research in transformers (a neural network architecture used in language models and machine learning) set the stage for much of the generative AI we see today. He noted that while pure fact-based queries are the bread and butter of Google search as we know it, questions with “no one right answer” could be better served by generative AI, which can help users organize complex information and multiple viewpoints. 
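To make that concrete, the heart of a transformer is scaled dot-product attention: each token in a sequence is re-represented as a weighted mix of every other token, with the weights computed from query-key similarity. Below is a minimal, illustrative sketch in Python with NumPy; the toy sizes and random inputs are placeholders, and real models like the ones behind Bard stack many such layers with learned weight matrices.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: mix the value vectors V using
    weights derived from query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

# Toy self-attention: 4 tokens with 8-dimensional embeddings (illustrative only)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): each token is now a context-aware blend of all four
```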

The new conversational AI, Bard, is built on a lightweight version of LaMDA, a large language model Google introduced in 2021. It is meant to help users, for example, weigh the pros and cons of different car models when shopping for a vehicle. Bard is currently available to a small group of testers and will scale to more users soon. 

[Related: Google’s own upcoming AI chatbot draws from the power of its search engine]

However, the debut didn’t go as smoothly as the company planned. Multiple publications noticed that in a social media post Google shared about the new AI search feature, Bard gave incorrect information in response to a demo question. Specifically, when prompted with the query “what new discoveries from the James Webb Space Telescope can I tell my 9 year old about,” Bard responded that “JWST took the very first pictures of a planet outside of our own solar system,” which is inaccurate. According to Reuters and NASA, the first pictures of a planet outside of our solar system were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004.

This stumble comes at an awkward time, given yesterday’s hype around Microsoft’s announcement that it is integrating ChatGPT’s AI into the company’s Edge browser and its search engine, Bing. 

Despite Bard’s bumpy breakout, Google did go on to make many announcements about AI-enhanced features trickling into its other core services. 

[Related: Google’s about to get better at understanding complex questions]

In Lens, an app based on Google’s image-recognition tech, the company is bringing a “search your screen” feature to Android users in the coming months. It will allow users to tap a video or image in their messages, web browser, and other apps, and ask the Google Assistant to find more information about items or landmarks that appear in the visual. For example, if a friend sends a video of her trip to Paris, Google Assistant can search the screen of the video and identify a landmark in it, like the Luxembourg Palace. It’s part of Google’s larger effort to mix different modalities, such as visual, audio, and text, into search in order to tackle more complex queries.

In the maps arena, a feature called immersive view, which Google teased at its 2022 I/O conference, is starting to roll out today. Immersive view uses a method called neural radiance fields to generate a 3D scene from 2D images. It can even recreate subtle details like lighting and the texture of objects. 
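In broad strokes, a neural radiance field is a learned function that maps a 3D point (and a viewing direction) to a color and a density; a pixel is rendered by sampling that function along a camera ray and alpha-compositing the samples. The sketch below is a rough Python/NumPy illustration that swaps in a toy analytic field for the trained network to show the volume-rendering step; none of the names or values reflect Google’s actual implementation.

```python
import numpy as np

def radiance_field(points, view_dir):
    """Stand-in for the trained network in a real NeRF: maps 3D points to an
    RGB color and a density. This toy version is a 'fuzzy sphere' at the
    origin; the viewing direction (which real NeRFs use for view-dependent
    effects like reflections) is ignored here."""
    dist = np.linalg.norm(points, axis=-1, keepdims=True)
    sigma = np.clip(5.0 * (1.0 - dist), 0.0, None)   # dense inside radius 1
    rgb = np.clip(0.5 + 0.5 * points, 0.0, 1.0)      # color varies with position
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume rendering: sample the field along a camera ray and
    alpha-composite colors, weighted by accumulated transmittance."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = radiance_field(points, direction)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # spacing between samples
    alpha = 1.0 - np.exp(-sigma[:, 0] * delta)        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # light surviving so far
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)       # final composited pixel color

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # RGB for one ray; a full image repeats this for every pixel
```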

[Related: Google I/O recap: All the cool AI-powered projects in the works]

Outside of the immersive view feature, Google is also bringing search with live view to maps, which allows users to scope out their surroundings by using their phone camera to scan the streets around them and get instant augmented reality-based information on nearby shops and businesses. It’s currently available in London, Los Angeles, New York, Paris, San Francisco, and Tokyo, and will expand soon to Barcelona, Dublin, and Madrid. For EV drivers, AI will be used to suggest charging stops and plan routes that factor in traffic, energy consumption, and more. Users can also expect these improvements to trickle into data-based projects Google has been running, such as Environmental Insights Explorer and Project Air View.

To end on a fun note, Google showcased some of the work it’s been doing using AI to design tools across its arts and culture initiatives. As some might remember from the past few years, Google has used AI to locate your (and your pet’s) doppelgängers in historic art. In addition to tackling research challenges, like helping communities preserve their language word lists, digitally restoring paintings and other cultural artifacts, and uncovering the historic contributions of women in science, AI is being used in more amusing applications as well. For example, the Blob Opera was built from an algorithm trained on the voices of real opera singers. The neural network then puts its own interpretation on how to sing and harmonize, based on its model of human singing. 

Watch the entire presentation below: 

Update on Feb 13, 2023: This post has been updated to clarify that Bard gave incorrect information in a social media post, not during the live event itself. This post has also been updated to remove a sentence referring to the delay between when the livestream concluded and when Google published the video of the event.

 
