What do the AI giants really want?


Everyone is talking about them, they are the center of attention, and their products are used more and more; yet ultimately we know little about them and wonder what their intentions are. Do we really know what AI giants such as OpenAI, Microsoft, Google, Meta, Amazon, and Anthropic are trying to achieve?

Sometimes we may even wonder whether they themselves know where they are going or whether they are devising their strategy as they go along, discovering the power of what they are building. After all, as Sam Altman, co-founder and CEO of OpenAI, the company behind ChatGPT, said:

“We literally had no idea we were ever going to become a company. Like the plan was to put out research papers. But there was no product, there was no plan for a product, there was no revenue, there was no business model, there was no plan for those things.” (An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company)

But beyond the talk of progress, security, democratization, and productivity, these technologies are already having an impact on our work, our relationship with information, and our decisions, which deserves some consideration.

In short:

  • AI giants want to become the main interface for our digital activities by integrating AI into all tools.
  • They are ultimately aiming for highly autonomous systems, sometimes even general intelligence, without consensus on what this entails.
  • AI is already replacing or transforming human functions and challenging certain existing economic models.
  • An advertising model is emerging, raising questions about transparency, manipulation, and trust.
  • AI is not just a technical issue, but a challenge for governance and control.

Dominating the next layer of software

The GAFAM of the AI era want to establish themselves as the go-to platform for interacting with the digital world, a bit like a one-stop shop for our digital needs.

Whether you want to use a search engine, book a plane ticket, buy a product, use an ERP, or write a text, the tool you use will potentially go through artificial intelligence, which will then operate those tools on your behalf.

For some, the risk is moderate: in a business, AI will not replace your ERP, but will become the way you interact with it. If, on the other hand, you are a search engine, such as Google, you may eventually be replaced (Can This A.I.-Powered Search Engine Replace Google? It Has for Me; How AI is changing the rules of web traffic), and the same applies if you are a content provider ([FR] What if AI broke the internet? How agents are transforming web browsing and its business model).

The goal is not simply to sell tools, but to become the go-to place for thinking, producing, researching, solving, coding, planning, buying, and more.

What’s next? Today, ChatGPT is improving its product comparisons, offering affiliate links, and will soon be making purchases for you. But what’s to stop it from trying to replace retailers and Amazon, just as it is trying to replace Google?

The challenge is not so much the breadth of these players’ footprint (their omnipresence) as its future depth (how far they will go in the value chain).

They are not aiming to sell a service but to build what could be called a cognitive operating system, integrated everywhere: in our communication tools, office software, business tools, browsers, and operating systems.

Approaching “superintelligence,” each in their own way

Some players, such as OpenAI, DeepMind, and Anthropic, have a clear ambition: to achieve artificial general intelligence (AGI). However, there is no consensus on how to define AGI or when it can reasonably be expected to be achieved, assuming it is possible at all.

That said, the bar seems to be constantly lowered on this question, for fear of discouraging investors with an ever-receding horizon.

But others are taking a more pragmatic approach, with more measured ambitions. In any case, the goal is no longer to create specialized assistants, but to develop systems capable of reasoning, learning, planning, and acting in complex environments, with increasing autonomy and, one day, perhaps even total autonomy.

This autonomy will not only need to be monitored; someone will also have to decide who controls it.

Replace or absorb?

Behind AI agents and assistants, a substitution dynamic is well underway.

Workers performing routine tasks are the first to be targeted: writing, analysis, reporting, support, and simple programming are tasks that are being absorbed by AI.

Traditional search engines are under attack from conversational assistants, which bypass the results page and threaten their revenue, as they do for the media.

SaaS platforms could be disintermediated by AI integrated directly into operating systems or productivity suites. Their survival is not in question, but they will be less visible, which may not be a bad thing in terms of user experience. They are fighting back by offering their own AI and conversational agents, but with a few exceptions (Workday Salesforce Partnership: Teaming Up For Enterprise AI), they are not addressing the central issue of data interoperability: the data stays locked away in their own silos.

Outsourced support functions (after-sales service, moderation, transcription) are gradually being automated.

And tomorrow, it may be certain intermediate managerial functions whose perceived value will be reduced to a role that AI can orchestrate.

Replacing humans may not always be the initial ambition of these platforms, but it is the only marketing argument businesses hear when buying the product. Yet some argue that AI is being applied to the wrong use cases and that, in the end, the gains in growth and productivity will be lower than expected (Daron Acemoglu: What do we know about the economics of AI?).

A shift towards an advertising model?

Of course, these players face the question of the advertising model that made their predecessors so successful. You know the famous saying: “if it’s free, you’re the product.”

Some are already there, others are thinking about it, but all will be tempted.

Google is testing sponsored results in its conversational AI. Meta is injecting AI into its messaging services and connected glasses, and OpenAI is offering affiliate links.

In this area, economic pragmatism will prevail over ethical considerations. Returning to Sam Altman’s interview, he tells us:

I am more excited to figure out how we can charge people a lot of money for a really great automated software engineer or other kind of agent than I am making some number of dimes with an advertising based model.

But, still on the subject of advertising:

I hope not. I’m not opposed. If there is a good reason to do it, I’m not dogmatic about this. But we have a great business selling subscriptions.

It’s up to each of us to work out what they really want.

The advertising model applied to AI raises a number of questions:

  • Contamination of the response: if the AI’s response is influenced by a sponsor, how can we know?
  • Soft manipulation: AI becomes an invisible lever of algorithmic influence.
  • Cognitive impoverishment: by focusing on what sells, we neglect what enriches us intellectually.
  • Loss of trust: if AI becomes a tool of persuasion, it ceases to be a tool of assistance.

And above all: at what point does AI stop answering a question and start selling us an intention with the aim of influencing our decisions? (AI could map and manipulate our desires, say Cambridge researchers)

What all this tells us

AI is not a technology like any other, and now more than ever, the stakes are not just technological but structural, strategic, and cultural.

AI is therefore becoming an invisible infrastructure that embodies more than ever Nicholas Negroponte’s statement that “Computing is not about computers anymore. It’s about living.” This makes the subject even more sensitive, as one might think that whoever controls AI will control our lives as AI becomes the default interface between us and information, knowledge, and action.

The real issue will not be innovation, but its use, adoption, control, and governance.

Bottom line

AI is not just a technical or HR issue. It is a matter of organizational design, cognitive sovereignty, and collective choices.

In the short term, AI giants want to capture our attention, our time, and our data, and in the medium term, the framework through which we interpret reality.

And in the long term? Perhaps they don’t know themselves yet, but maybe letting them decide isn’t the best option.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN (https://www.duperrin.com/english)
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler