Dystopia: what if artificial intelligence broke the web and the Internet?


In my “dystopias” series, I wanted to try to describe fictional worlds where certain trends in today’s world had been taken to extremes. On reflection, I wonder whether this post really belongs in this category, because ultimately it seems to me that the question is not so much whether the scenario described here will happen, or even when, but how quickly.

In any case, over the past few years, the transformations brought about by generative artificial intelligence and then autonomous agents have gradually challenged the very foundations on which the web was built: its technical architecture, its economic models, its spirit, and its founding principles. Far from causing a clear collapse, these changes have led to a form of continuous erosion that is difficult to grasp as a whole, as it is happening simultaneously at different levels: content production, modes of information circulation, conditions of access, monetization models, cognitive quality of resources, and diversity of viewpoints.

This is therefore not simply a transitional crisis or growing pains, but rather a paradigm shift, in the sense that the implicit rules of the web ecosystem are being reconfigured without its main players (users, producers, and regulators) having really been able to debate them or anticipate their consequences.

In short:

  • Artificial intelligence is profoundly transforming the web, not through sudden disruption but through the gradual erosion of its technical, economic, and cultural structures.
  • AI agents are changing how we use the web: they’re replacing human navigation with automated extraction, reducing content to reformulated and decontextualized data.
  • This evolution is undermining the web’s attention-based economic model, reducing content monetization and impoverishing qualitative and diverse production.
  • The growing control exercised by a few large platforms centralizes intermediation, making producers dependent on opaque criteria and reducing the visibility of content.
  • The widespread use of AI leads to cognitive passivity among users and a gradual impoverishment of content, posing a structural risk to the future of the web.

From the transformation of interfaces to that of uses

This change was not brought about by a deliberate decision, and it was never claimed as such. It is the result, on the one hand, of the rapid increase in the capabilities of language models and generative systems and, on the other, of a growing sense of fatigue among users faced with a web that has become dense, disordered, cognitively demanding, and sometimes frustrating to navigate.

In this context, the emergence of agents capable of responding on behalf of the user, summarizing information, making purchases, or searching for information on their behalf has been rightly perceived as an efficiency gain. Gradually, expectations have evolved: we no longer search for information but expect an answer, we no longer visit a site but consult a summary, we no longer click on links but delegate the entire process to a conversational interface.

This change goes beyond the functional and in fact constitutes a change in the very nature of the web: by replacing human navigation with automated extraction, AI agents are transforming the web into a simple database, in which content is no longer the object of interaction, but a resource to be extracted, reformulated, and redistributed.

Traffic remains steady but no longer supports the attention economy

The first to notice the effects of this transformation were the content producers themselves, particularly website publishers, digital media outlets, and freelancers. While traffic volume appeared to remain steady, it quickly became apparent that the very nature of this traffic was changing: a growing proportion of visits were no longer coming from human users, but from automated agents, which accessed pages to extract information without activating monetization mechanisms or complying with distribution conditions.

In other words, the content was still being used, but less and less of it was being read in its original form. It was reworded, summarized, taken out of context, and then reproduced elsewhere, often without attribution, without mention of the source, and without direct economic value for the author. For small producers, this often meant an immediate loss of revenue, a damaging decline in visibility, or even a gradual exit from the ecosystem. For larger producers, the response has been to multiply access barriers: technical restrictions on crawlers, proprietary formats, content reserved for subscribers, or formats designed specifically for AI. But while these strategies may slow the phenomenon down, they do not challenge it, and the trend remains unchanged: content is absorbed, used, and redistributed, but no longer remunerated.
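In practice, the "technical restrictions on crawlers" mentioned above usually start with a robots.txt file that opts out of known AI user agents. A minimal sketch (the user-agent list is illustrative, not exhaustive, and, as the article notes, compliance is entirely voluntary on the crawler's side):

```
# robots.txt — opt out of some well-known AI/training crawlers
# (illustrative list; crawlers honor these rules only if they choose to)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Regular search indexing can remain allowed
User-agent: *
Allow: /
```

This is why such measures slow the phenomenon without stopping it: the file expresses a preference, not an enforcement mechanism.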

In this context, the economic logic on which much of the web was based is slowly but surely collapsing. Clicks no longer have any value, and the attention economy, already weakened by information saturation, is losing one of its pillars.

A qualitative decline that is difficult to reverse

Beyond the issue of funding, the very nature of the content produced and distributed is beginning to change profoundly. When an article, blog post, analysis, or report can no longer find its audience through “normal” consultation, it becomes less relevant and less viable to continue producing it.

This leads to a dynamic of qualitative impoverishment, which is accentuated as long, nuanced content disappears or becomes scarce. Short formats that can be immediately interpreted and summarized in a few sentences are favored, not because they are better understood, but because they are better processed by AI models.

This phenomenon puts pressure on the stylistic, cultural, and linguistic diversity of the web. Anything that cannot be easily summarized is not exposed, and anything that resists simplification is treated as noise.

Furthermore, end users often no longer have access to sources, or only on an optional basis. They no longer know who is speaking, in what context, with what intention, or with what potential biases. The AI agent produces a response, not a reflection or a path, and the web is shifting from a system of links to a series of fragments without context.

Increasing centralization

This transformation benefits a small number of players who now control interfaces, models, access to content, and how it is used. They decide how content is extracted, interpreted, and displayed. They capture usage data, define what is seen and what is not, and influence behavior and even thoughts.

Search engines, once open spaces, are being replaced by closed, personalized, non-auditable agents. The public web is becoming a mere technical backdrop, whose diversity is no longer perceived by users, while producers must adapt their content to filters whose criteria they do not know and whose effects they cannot control.

In this context, the best-equipped publishers manage to optimize their visibility, sometimes through partnerships, while others rely on random rankings or gradually drop out of the game.

Cognitive passivity becoming the norm

On the user side, the widespread adoption of AI agents is quickly changing how people use technology. The convenience is real: responses are immediate, easy to understand, and often sufficient. But this apparent efficiency comes with a significant reduction in cognitive effort. The habit of exploring disappears, as does the habit of thinking and analyzing.

This passivity is not a conscious choice, but becomes established because it is simpler and we are, by nature, inclined to take the path of least resistance. At the same time, however, it undermines essential skills related to critical judgment, source verification, and the formation of informed opinions. Exposure to conflicting points of view decreases, and with it, the ability to debate in an informed and constructive manner (see “Degenerative artificial intelligence: when AI makes us unlearn how to think”).

In some contexts, this trend is even reinforced by institutional decisions: automation of public communication, widespread use of AI tools in education, and synthesis generated for official documents. Artificial intelligence no longer simply filters access to the web, but structures access to information itself.

An inevitable impoverishment of content

As human production declines or becomes less accessible, AI models are increasingly being trained on content generated by other AIs.

By feeding on synthetic, simplified content with a uniform style, models lose lexical richness, argumentative variety, depth, and stylistic range. The issue here is not a decline in the performance of these models, but in the quality of what they produce.

The logic was predictable: less human content means more AI content, which means AI trained on AI content, which means an impoverishment of the models and, ultimately, of the web. The cycle is set in motion, but it remains to be seen whether it is reversible.
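The feedback loop described here can be made concrete with a toy simulation (purely illustrative, not a model of any real training pipeline): treat a "corpus" as a probability distribution over tokens, and let each generation of models be trained only on a finite sample drawn from the previous one. Rare tokens drop out of the sample and can never return, so the vocabulary shrinks generation after generation:

```python
import random
from collections import Counter

def next_generation(dist, sample_size, rng):
    """Simulate training on the previous generation's output:
    draw a finite sample of tokens, then re-estimate the
    distribution from that sample alone."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    sample = rng.choices(tokens, weights=weights, k=sample_size)
    counts = Counter(sample)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

rng = random.Random(42)
# Generation 0: a "human" corpus with 1000 distinct tokens.
dist = {f"w{i}": 1 / 1000 for i in range(1000)}
support = [len(dist)]
for _ in range(10):
    dist = next_generation(dist, sample_size=500, rng=rng)
    support.append(len(dist))
# Tokens absent from a sample can never reappear, so the
# vocabulary size (support) is monotonically non-increasing.
```

Real pipelines are vastly more complex, but the absorbing dynamic (what disappears from the training data cannot come back) is precisely the structural point made above.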

A trajectory that can still be changed

The web will not disappear, but it could lose much of what made it rich: the diversity of voices, the heterogeneity of formats, the possibility of exploration. It could become a background system: useful and exploited, but no longer consulted.

It is still possible to avoid this scenario. This requires developing auditable models, transparent algorithms, fair redistribution mechanisms, and, finally, recognizing that human attention, slowness, and the confrontation of ideas have a value that no algorithm can replace.

Bottom Line

The idea of a web broken by artificial intelligence is not a rhetorical exercise. On the contrary, it refers to a dynamic that is already underway, in which the technical, economic, and cognitive foundations of the internet are gradually being redefined without shared governance, without debate, and without really asking ourselves how far we are going to go.

This is a trend that can still be contained, provided that its structural causes and effects are recognized.

Artificial intelligence is not destroying the web, but transforming it according to its own logic: extraction, synthesis, optimization. It is up to us to decide whether this logic is good or bad for society.

To answer your questions

How is AI changing the way the web works?

AI is transforming browsing: instead of visiting websites, users receive summary responses generated by agents. The web is becoming a database exploited by machines rather than an interactive space. This evolution is redefining the implicit rules of the network, affecting access to information, the visibility of sources, and economic models, without any real collective debate or anticipation of the consequences.

Why are web business models under threat?

The web economy was based on clicks and human attention. However, AI agents extract information without generating advertising revenue or promoting sources. Content continues to be used but is no longer remunerated, which weakens publishers. Small producers lose visibility and revenue, while larger ones attempt to restrict access or create proprietary formats, without halting the underlying trend.

What effect does AI have on content quality?

Long, nuanced formats are becoming less visible because they are difficult for AI to summarize. Short, easily exploitable content dominates, leading to stylistic and cultural homogenization of the web. Users mainly access pre-formulated answers, without sources or context, which impoverishes the diversity and depth of information available online.

What dangers does centralization through AI interfaces pose?

A few players now control access to and use of content. Unlike open engines, their systems are opaque and define what is visible and what is not. They capture usage data and impose their filters on producers, creating strong dependency and reducing the plurality of the web. This increases the risks of concentration and limits freedom of access to information.

Is the impoverishment of the web inevitable?

The risk exists because less human content means more AI-generated content, which then feeds into the models, accentuating simplification. This cycle threatens the richness of the web. However, it can be mitigated by transparent models, fair redistribution, and recognition of the value of human contributions, which are the only guarantees of diversity and depth.

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler