Google I/O 2024: From Search’s AI makeover, Gemini in Google Photos, Docs, and Gmail, to AI upgrades for Android, here’s everything Google announced

Google kicked off its annual developer conference, Google I/O 2024, or as CEO Sundar Pichai calls it, “Google’s version of the Eras Tour, but with fewer costume changes,” and it was all about Gemini. “At Google, though, we are fully in our Gemini era,” Pichai quipped in his welcome address. And he wasn’t kidding; Google went all out on the AI announcements it had been keeping under wraps for months, even years. So much so that Pichai himself kept count of how many times they said AI. He says 120, but we count 122.
Whether it’s Android, Chrome, Gmail or Docs, every Google app and service is being infused with AI. Here’s a recap of the 110-minute keynote, covering everything new Google announced, so you don’t have to sit through all the AI mumbo jumbo.

Google Search is getting an AI makeover

Google Search is getting a custom Gemini model of its own. Instead of showing people traditional website links, Search will now serve them AI-generated responses, known as AI Overviews. Users can ask complex questions and receive comprehensive answers, plan meals and vacations, explore AI-organised results pages for inspiration, and even search using videos. These features are rolling out to users in the U.S. first, with other countries to follow.
If you prefer the good old way of going through all the links, you can select the ‘Web’ filter, which shows the traditional search results page.

‘Ask Photos’ to find the pictures in the gallery

Sometimes you suddenly remember that one photo from that trip, but you can’t find it no matter how you search. Now there’s ‘Ask Photos,’ another Gemini-powered feature that will search your photos and videos for you. All you need to do is ask naturally, the way you would ask a person. The AI in Photos can also handle tasks like creating trip highlights or personalised captions.

Gemini 1.5 Flash: faster and more efficient

Alongside the Pro, Google introduced Gemini 1.5 Flash, the newest and fastest addition to the Gemini model family. This model is optimised for high-volume, high-frequency tasks at scale, offering improved cost-efficiency and a breakthrough long context window. Despite its smaller size, it still excels at summarisation, chat, image and video captioning, and data extraction from long documents and tables. Google says it achieved this through a training process it calls “distillation,” in which the larger 1.5 Pro model transfers its knowledge to the smaller one.

Gemini 1.5 Pro is getting some improvements, too

Google says it has significantly enhanced its Gemini 1.5 Pro model, extending its context window to 2 million tokens. It is better at code generation, logical reasoning, planning, multi-turn conversation, and audio and image understanding, resulting in improved performance on public and internal benchmarks. There is also increased control over model responses for specific use cases, plus the ability to reason across the images and audio in videos. The new model is being integrated into Google products such as Gemini Advanced and the Workspace apps.

Gemini Nano goes multimodal

Google’s smallest AI model, Gemini Nano, is going multimodal, which “turn[s] any input into any output” to “understand the world the way people do — not just through text, but also through sight, sound and spoken language.” Gemini Nano is coming to Chrome and will also power AI features on Android.

Google’s Project Astra imagines AI as your everyday agent

Project Astra is what Google DeepMind describes as a universal AI agent, one that aims to be a helpful assistant in everyday life by understanding and responding to a complex, dynamic world. Astra, powered by the Gemini 1.5 Pro model, can recognise objects, explain code, find lost items, and even identify your neighbourhood. Google said it plans to integrate some of these capabilities into the Gemini app later this year, hinting at future applications in smart glasses and other form factors.

Google’s Gemini AI can be turned into different “Gems”

How about a version of your AI chatbot that’s built for one specific job? That’s what Google’s “Gems” are for. Users can create personalised versions of the Gemini chatbot tailored to specific tasks and personalities, such as a gym buddy, sous chef, coding partner, or creative writing guide, simply by describing what they want the Gem to do and how it should respond. The feature is rolling out soon to Gemini Advanced subscribers.

Gemini comes to Gmail, Docs, and other Workspace apps

Google is integrating Gemini into the sidebar of its Workspace apps, including Gmail, Docs, Sheets, Slides, and Drive, allowing the AI to access user data across these apps and help complete multi-app tasks seamlessly. The feature is available to early testers and will roll out to paid Gemini Advanced subscribers next month, with plans to expand to more users in the future.
Google also showcased additional AI features, such as AI chatbots for homework assistance and PTA meeting summaries, a Gemini-powered AI Teammate for coordinating communications and managing projects, and ways to set up Gems as automated routines for various digital tasks.

A smarter Android

With Gemini Nano running right on the device, your phone should be able to understand you much better and also help you avoid being scammed. Yes, it can listen to your calls (provided you allow it) and tell if the person on the other side is a scammer.

Remember Google Assistant? It will soon be a thing of the past. Google is replacing it with Gemini, which is being made more contextually aware and integrated more deeply into Android. Gemini can now interact better with content on the phone’s screen, making it capable of answering questions about videos, PDFs, and whatever else is open on your phone. Gemini can also generate images that you can drag and drop into chats or emails.
Circle to Search is getting better at maths and science, so students can use it for homework too. Google Lens, meanwhile, can now search with video.

Gemini is getting natural at conversations

For starters, you will be able to chat with Gemini directly within the Google Messages app. Google is also bringing what it calls Gemini Live, which will let people speak with the AI in a natural-sounding voice, interrupt mid-response with clarifying questions, and even use their camera to discuss what they see around them, making interactions more intuitive and closer to real-life conversations. However, it will be available only to Gemini Advanced subscribers.

Google takes on Sora with Veo, an AI that creates realistic videos

Touting it as its most advanced video-generation model, Google promises that Veo can create high-quality 1080p videos in a range of cinematic and visual styles from a simple text prompt.
Google has collaborated with filmmakers and creators like Donald Glover and his studio Gilga to develop the model and make it available to select creators in private preview through VideoFX. In the future, Google plans to integrate some of its capabilities into YouTube Shorts and other products.

Google’s new image-generation AI will make fewer mistakes

Imagen 3 is Google’s newest and, per the company, highest-quality text-to-image model. Google claims it generates photorealistic, lifelike images with incredible detail and fewer visual artefacts than prior models. The model is now available to select creators in private preview through ImageFX and will soon be integrated into Vertex AI.

SynthID now watermarks AI-generated videos and can also detect them

With so much AI-generated content out there, it is really important to be able to tell what is real and what is artificially generated. So Google is extending SynthID, its watermarking technology already available for images, to videos generated by Veo, and the tool will also be able to detect AI-generated videos.

A new pair of Google Glasses

During the I/O 2024 keynote, the company subtly revealed a new prototype of augmented-reality (AR) glasses in a video demonstration of Project Astra, and that raised everyone’s eyebrows. While Google executives Demis Hassabis and Sergey Brin didn’t confirm whether the company is working on a new pair of AR glasses, the two hinted at AI-powered AR glasses as a future form factor, suggesting the technology could be a “killer app” for hands-free interaction with generative AI, despite Google’s previous struggles with Google Glass.
