
Google is poised to transform the whole internet once more. The company is going all in on AI.

Over the last year, the biggest question hanging over Google has concerned its flagship product, which remains its main source of income: can AI chatbots replace search engines? The company offered some clarity in May when it announced that AI Overviews, its new name for AI-generated answers, would soon appear at the top of users' results pages. "In the next era of search, AI will do the work so you don't have to," an accompanying video promised. It's a step toward an internet that simply answers questions, without offering any links or hints.

Any change to Google's search engine matters. The search box is one of the primary ways people engage with the internet, their computers, and their phones. And because Google plays such a large, contested, and possibly shifting role on the web as a distributor and monetizer of attention, even this partial move toward AI-generated answers has been treated by the press as a watershed moment.

But after more than a year of testing, Google's AI-search experiment hasn't felt, to me at least, like a total reinvention; it's just one more questionable result on an increasingly cluttered results page. These days, I scan the AI answer just long enough to notice that it is occasionally flatly wrong. Maybe it will get better, as Google says it will; maybe users' satisfaction matters more than quality. In any case, the question of whether Google is about to remake the online economy, and whether synthesized summaries will deal a final, irreversible blow to publishers and other Google-dependent businesses, won't stay open for long.

When it comes to search, what Google wants from AI is clear enough: to defend its dominant position and fight off rivals like OpenAI. But search was only one of the many products and services the company showed off at Google I/O, its developer conference, in May. The search changes also served a broader purpose: to tell the public that the company is all in on artificial intelligence, a bet that could once again shift privacy norms in favor of companies like Google.

Google is releasing, or at least teasing, new tools for generating images, music, and video. There will be a new voice assistant that can answer questions about what your device's camera or screen is showing. There will be upgrades to assistants that can answer questions about your files, a just-finished meeting, or the contents of your inbox. There will be a tool that can screen phone calls in real time for language associated with scams.

Many of these features exist only as live demos, and many more are still closer to proposals, or marketing. To whatever its rivals were doing with artificial intelligence, Google seemed to be saying: We're doing it too, and in fact, we were doing it first.

There is another story emerging here, though, one that casts AI less as a standalone technological advance to which Google is working out its relationship (its creator? a victim? both?) and more as an extension of one of the company's defining habits, which a former CEO once characterized as "Get right up to the creepy line and not cross it." Many of these products are explicit about what they can do for you; in exchange, they get fuller access to every corner of your life online. In any case, the industry's rush to deploy AI is also a bid for more access and more data, on the assumption that people will give them up.

Such moments are rare but not unprecedented. Soon after the debut of Gmail in 2004, Google drew criticism for inserting contextual advertisements inside users' inboxes, which some at the time saw as a brazen, presumptuous violation. A coalition of privacy groups published an open letter to Google executives arguing that "scanning personal communications in the way Google is proposing is letting the proverbial genie out of the bottle." The letter would soon read as both quaint and prophetic. In retrospect, it's clear that, to the extent people understood the bargain, they were happy to take it, and this was simply how things were going to work online.

By 2017, Google had stopped scanning email to target contextual advertising. By then, the company already offered maps, office software, a smartphone operating system, and dozens of other products that relied on collecting user data, so the gesture toward privacy felt a little out of place: Google was practically living in our pockets, its software on billions of phones, through which users conducted an ever-greater share of their lives.

Since then, shifts in privacy norms have tended to happen quietly, with subtler effects. A smartphone user opens their phone one day to find that their photo library has been scanned for faces and sorted into albums of people. Oh. A Zoom user in the middle of a heated discussion with a coworker discovers that the meeting is automatically being recorded and transcribed into something searchable. Umm.

AI assistants give tech firms a chance to push things further: they are genuinely novel tools, sometimes miraculous-seeming, and companies like Google are eager to sell them as such. They depend on access to data that users have, in many cases, already granted. That a Google assistant asks for access to documents stored in Google Docs is not shocking, but it isn't trivial either, and it raises the question of how far the idea of an all-knowing assistant can reshape people's expectations about their digital autonomy.

In the past, Google has leaned on some fairly thin justifications for why collecting user data is essential, such as the need to "help show you more relevant and interesting ads." Mostly, though, its arguments have been made in the form of software, which users have either adopted or rejected. AI assistants, by contrast, make a more forceful case for the benefits, and the necessity, of user surveillance: obviously they work better if they can see what the user sees, or at least what's on their screens. The tension users might feel between a smarter, more human-like assistant and its demands for access to ever more private material is eased, for now, by the fact that such assistants aren't quite here yet, if and when they arrive at all.

Much as years of collecting web data through Google Search allowed Google to start generating plausible results of its own, AI assistants promise to help you operationalize the enormous trove of data that Google has been gathering, with your technical consent, for purposes other than marketing. Narrowly, this looks like a better deal: the customer gets back at least some of the value of that massive personal corpus in the form of a helpful chatbot. More broadly, though, the appearance of choice should sound familiar. (There are signs that Google recognizes and is mindful of the privacy stakes: it made clear that its call-screening feature, for instance, would rely on on-device AI rather than sending data to the cloud.)

The popular belief that the AI surge poses a disruptive threat to the incumbent internet companies deserves more skepticism than it has received so far, because the industry's demands line up rather neatly. According to the people building these companies, the key to realizing the full, glorious potential of large-language-model-based AI, whether at the level of personal assistants or in the pursuit of machine intelligence, is simply more access to more data. And these companies' existing businesses were built on the acquisition, production, and monetization of large amounts of highly personal data about their users.

This is less a conspiracy theory, or even a calculated scheme, than an idealized picture of a future in which our conventional ideas of what belongs to us have been thoroughly redefined. AI companies have argued that to live up to their promises, they must consume massive volumes of both public and private data. Google is making a more intimate version of the same claim: soon, it will be able to help with everything. All it asks of you in return is everything.