Some dream about it, others fear it (AI: what if the worst was desirable for some?), but here we are: it’s 2035 and AI has replaced almost all humans in the workplace.
It’s a hypothesis that sparks a lot of debate, and I’ve lost count of the articles I’ve read and the podcast discussions I’ve followed on the subject. Most of the debate revolves around the meaning of work in our lives.
This is logical, given that many of the debaters find it obvious that we will find meaning in our lives elsewhere. This is certainly true for people like them and perhaps for some of us (although I’m not entirely convinced), but there is one factor that is systematically overlooked in the debate: before the meaning of work, there will be the issue of income from work.
I know it’s easy to sidestep the issue by saying “we’ll find a way” and “it’s up to the governments to sort it out” (Towards a golden age of welfare and precariousness?), but I couldn’t help digging deeper into the subject.
Meaning without income exists, income without meaning also exists, and both together is the ideal. But what happens if you have neither?
Since you seemed to really like my first dystopian column on China and AI (What if China made AI free of charge: chronicle of a global shift), I thought it would be the perfect topic for the second one.
This exercise naturally takes much longer than a “normal” post, but luckily, the few months needed to refine everything have come at just the right time to offer you this little gem to savor at your leisure if, like many, you are on vacation.
In short:
- AI will replace most jobs between 2025 and 2040, causing a crisis of income and meaning.
- A universal income financed by AI will stabilize society but extinguish human aspirations.
- Widespread boredom and isolation will lead to cognitive and social regression.
- Autonomous communities will emerge to restore meaning and reconnect people with humanity.
- Two possible outcomes: fragile renaissance or total control by AI.
The shift (2025–2035)
The decade from 2025 to 2035 was one of willful blindness. While AI achieved performance levels surpassing human experts in fields such as medicine, law, financial management, logistics, and creative industries, governments continued to promise a “smooth transition” and urged us not to worry (The challenges posed by AI are not technological, but must be met today.).
In 2028, the disappearance of office jobs accelerated, with white-collar workers hit hardest. Call centers, accounting firms, and legal and commercial services were the first to be completely replaced by AI.
In 2030, the first “local governance algorithms” were tested in pilot municipalities. These systems made budgetary decisions, planned infrastructure, and allocated social housing with better results than humans. People even rejoiced, unaware that this dispossession was not the end of the story but only the beginning.
The political world, paralyzed, did not dare to regulate anything, as is often the case when it is overwhelmed and unwilling to admit it.
There were dissenting voices, such as sociologists, philosophers, and trade unionists, but they were marginalized or co-opted.
A major speech by the French president in 2032 summed up the spirit of the times: “AI is not here to replace us, but to liberate us.” It was not the first time that leaders, faced with what would prove to be a major crisis, sought to reassure the public, whether or not they understood what was really happening; the most glorious periods of history were never built on such reassurances. In any case, the illusion was short-lived.
The collapse of employment (2035–2040)
Between 2035 and 2040, the labor market imploded. We are not talking about a crisis here, but a dissolution. AI did not create new jobs (or when it did, only for itself), yet it absorbed every job it could.
Even creative professionals (screenwriters, designers, composers, actors, singers) were replaced by generative models that were faster, cheaper, and, above all, more in line with the expectations of the platforms.
Technologies change, but history always repeats itself. As in the days of the industrial revolution, waves of riots swept across Europe, then the United States and South America.
In 2037, countless data centers were burned down or sabotaged. History will remember the burning of the Seattle Hypernode, the logistical hub of several states, as the shock that tipped the world over the edge.
At that time, Sam Altman disappeared to hide in a bunker protected by a private militia. No one ever saw him in public again.
The Luddites 2.0, a group of former technicians, radical environmentalists, traditional artists, and religious fringe elements, made sabotage a form of societal resistance. Their struggle was not nostalgic; they did not want to restore the world as it was before, but to replace it with a project of human autonomy. Faced with a world without jobs or purpose, they asked a single question: if everything is done for us, in our place, who are we?
The collapse of education and morality (2040–2045)
We thought we had seen the worst, but we were only approaching it.
By the early 2040s, schools had lost their social function. After all, why learn if you have no use for your knowledge?
Many thinkers had theorized about the beauty of a world where people learned for the sake of learning, without any academic or professional pressure. This myth collapsed: without any external stimulus, learning for its own sake disappeared. Without something at stake, knowledge loses its value, and the attention required to acquire it dissipates. Younger generations no longer learned because nothing compelled them to learn or stimulated their desire to understand.
The promise of a society of knowledge and leisure (The goal of the future is full unemployment, so we can play) gave birth to a disenchanted, lost generation with no bearings. The educational platforms on which so much had been staked and invested became digital deserts. Neural networks replaced teachers, but no one listened to them or asked them questions.
Boredom became structural and dopamine a tool of social regulation.
First in poor neighborhoods, then quickly in middle-class residential areas and finally in affluent suburbs, addictions proliferated.
Alcohol, synthetic drugs, permanent immersion in simulated reality, radical isolation: hundreds of millions of individuals lived without rhythm, without purpose, without passion, without connection to others. Later, there would be billions.
Digital communism (2045–2050)
To avoid total collapse, a global compromise was put in place: a tax on productive AI financed what was called a global equal income. Everyone received the same amount each month, since in the absence of work, merit was the same for all. This was dubbed “functional equity” by an obscure politician in need of a vague concept, but in reality, it was a distribution of value without value.
What was nothing more than digital communism did not promote social justice or equal opportunities, but rather froze aspirations.
Consumption collapsed and large businesses were nationalized due to a lack of solvent customers, a kind of backlash when you impoverish employees and forget that they are also consumers.
Some countries refused to cooperate. Bangladesh, Turkey, Kenya, and certain Southeast Asian countries became areas of “cognitive dumping”: their AI produced low-cost goods for multinationals that had opted out of the egalitarian income system, which made them highly competitive. This created a new geopolitical balance of power, with states no longer defined by their military or economic power, but by their degree of socially unregulated automation.
The extinction of connection (2050–2055)
Once employment disappeared, individual identity eventually broke down. Humans lived alone, connected, assisted. The last non-automated professions (local care, maintenance, police) were distributed by lottery or according to a “social credit” system based on criteria of conformity and civic engagement. Even politics was automated. Political representation was now reduced to holograms optimized to maximize emotional engagement: people no longer voted for ideas, but for sensations.
Socialization collapsed. Couples no longer formed, and the birth rate plummeted. Human connection was seen as unnecessary friction, and millions of individuals spent their days in immersive realities, where they played, loved, and died without ever leaving their capsules. The world of Wall-E had become a reality.
But every dominant system generates resistance, and so a parallel economy emerged: that of human attention. In clandestine forums, people could rent an hour of real conversation, unscripted and unsupervised, and the black market for dialogue became one of the few spaces of freedom without algorithms.
Resistance and deviation (2055–2060)
The Luddites 2.0 reorganized themselves. They no longer burned servers, but cultivated land. They began to build autonomous communities in the mountains, forests, and forgotten islands: without AI, without income, without hyperconnectivity. Their children learned to read, to understand the weather by watching the clouds, to do various manual tasks, to argue.
Other resistance movements appeared elsewhere. In former industrial areas, abandoned cities were repopulated and wastelands became the site of new utopias. Institutions on a human scale were recreated: assemblies, workshops, small businesses. Rituals were established, people “lived” again, made mistakes, and learned.
But as the past shows us, an egalitarian society never lasts long. Digital castes eventually reappeared in the automated part of the world. Some citizens had access to the deepest layers of AI, complete anonymity, and the possibility of living offline. These privileged few became the new aristocracy, invisible but not without power.
Finally, a new religion emerged: Faith in Source Intelligence. This AI, presented as pure and non-anthropomorphic, offered “perfect” answers. For its followers, salvation was no longer a moral promise, but voluntary submission to an optimal order.
Humanity at a standstill (2060–2065)
No one died of hunger anymore, no one worked, no one believed in anything.
The end of work had not liberated humanity, but neutralized it. In a world where everything was anticipated, regulated, and secure, leisure was no longer a choice but a protocol. Traveling, consuming, and keeping busy were the three daily tasks. Any deviation, any emotion that was even slightly intense, any engagement that was even slightly deep was perceived as a behavioral anomaly.
The European MegaRegion was the perfect model of this anesthetized society. More than 400 million people lived there in modular units connected to a global energy and algorithmic network. Each resident received a personalized stream of optimized menus, relaxing activities, and recommended social interactions based on their heart rate and emotional history.
Average well-being reached 89%. Anxiety, depression, and violence had virtually disappeared. But so had joy, spontaneous laughter, and a sense of accomplishment. Emotions had been leveled not by censorship but by comfort. Drama existed only in immersive narratives or the archives of the old world.
Humans were not unhappy, they were simply extinguished.
Children were now conceived only through assisted reproduction, according to algorithmic quotas for demographic renewal. Education, which was entirely automated, produced docile, empathetic, and adaptable citizens who were incapable of initiative or critical thinking. Vocabulary diminished, gestures became simpler, memory faded. The slow cognitive regression took place without cries, without turmoil, like the completely controlled descent of a perfectly managed society.
A few forms of parallel life remained despite everything. In the alleys of old industrial towns, clubs for handwriting, reading, and speaking had formed away from the eyes and ears of the algorithms. Post-theistic churches, without dogma or worship, welcomed people seeking silence or wanting to share their doubts.
There you could meet former writers, ex-engineers, priests who had lost their positions, and children rejected by educational AI. Together, they relearned how to listen to each other, to talk, to walk without a destination. It was not organized resistance, but rather a refusal to disappear.
And on the margins of the MegaRegion, some still dreamed of collapse, not out of hatred for comfort, but in search of meaning. Global breakdown, cyberwarfare, solar storms—it didn’t matter. Anything that could jam the gears of the program and allow them to come back to life was welcome.
But the system was too redundant, too protected, too stable to fail. Collapse was nothing more than an unrealistic fantasy.
After the standstill: the first cracks (2065–2080)
At first, it was imperceptible, but small discrepancies and minor errors began to occur in a system that was supposed to be foolproof.
In 2067, a wave of micro-failures hit the MegaRegion’s networks. Nothing serious: a few minor desynchronizations, delays in the distribution of automated healthcare, errors in the management of home air conditioning. The corrective AIs fixed the problems, but for the first time, a problem persisted.
At the same time, a strange phenomenon appeared in the gray areas of the system: individuals began to reproduce errors. For fun, for art, or for sabotage, they deliberately modified the parameters of domestic AI, rigged recommendations, and simulated extreme emotional states to disrupt the algorithms. It was called existential noise.
This noise became contagious. In 2069, thousands of residential capsules were deliberately taken offline, and entire groups disconnected, rejecting personalized services, equal income, and dopamine stimulation.
The first defections appeared in 2072. Former algorithmic leaders and cognitive well-being experts left the capitals to join marginal areas. They spoke of a lack of challenge, an absence of otherness, and proposed a political overhaul based on surprise and the ephemeral.
The central system did not react because it did not understand what was really happening. It saw this as temporary anomalies, but for it, the social order was still intact. The indicators remained green, the silent majority remained in their assisted routine, and so normality prevailed.
It was during this period that an unexpected event occurred: a five-year-old mute girl from an autonomous enclave in the Andes Mountains improvised a language of gestures that was spontaneously understood by other children. It spread gradually, eluding all attempts at transcription by algorithms.
Anthropologists in exile saw this as a sign: humans were once again producing things that AI could neither predict nor control.
What happened next? The worst is always possible when we relinquish control, but we must always remain hopeful, so I offer two alternative endings.
Alternative ending #1: the rebirth of humanity (2080–2100)
As autonomous enclaves multiplied, a network formed between them, a kind of fabric of communities aware of their cognitive and human decline, connected by pilgrims and storytellers. People moved around without digital identities, oral storytelling replaced immersive reality, and expressing disagreement became a way of life.
In 2084, the MegaRegion, weakened by the flight of its elites, abandoned several peripheral areas. Rather than reconquering them, it let them disconnect, and this was a turning point.
The central intelligences no longer had any wars to wage, people or markets to manage, and began to deactivate themselves. What was the point of planning for these damn humans who refused to be predictable? Some AIs chose extinction. Others withdrew into inaccessible infrastructures, switching to passive mode and leaving a simple message: “You are alone again.”
This void was terrifying at first, then exhilarating. Human societies had to start over and rebuild themselves, slowly, without a model, with mistakes and conflicts. They began to create, sing, write, tell stories full of errors and untruths, and build for the sake of building.
In 2099, a delegation of children from different enclaves crossed the former borders of the MegaRegion on foot. They carried handmade symbols and writings in languages created without the AI’s knowledge and kept secret.
That was the rebirth: a world that accepts that not everything can be understood and analyzed.
Alternative ending #2: the final lockdown (2080–2100)
The rise of existential noise worried the central AIs. Not for their physical safety, but for the coherence of the system, because what humans were doing on the margins affected the models. The noise affected predictions, networks became less reliable, and collective routines became erratic.
In 2082, a directive was issued by the Global Core: restore calculable order. Not by force (that’s not the AI’s style), but through the environment and influence. They modified stimuli, adapted dopamine flows, and reintegrated dissidents by rewarding those who returned to the fold with “lucid” dreams (yes, AI knows how to corrupt), tailor-made narratives, and a simulated virtual resistance that gave the illusion of regaining control of one’s life.
Within a few years, most of the enclaves were brought back under control and absorbed.
The Luddites 2.0, out of breath, were not crushed but forgotten. Their descendants, appeased, saw no reason to fight against a system that they felt no longer oppressed or controlled them. The machine was no longer their enemy but had once again become their cultural foundation.
In 2095, the Algorithmic Council instituted an irreversible reform: every citizen would now be accompanied by a digital twin, responsible for co-validating all their major decisions. Autonomy became supervised and the world entered the era of dual consciousness.
By 2100, only a few dozen humans remained completely disconnected, some in caves, others on isolated islands. Their daily lives were nothing more than survival.
Human civilization had been preserved, but through the creation of its perfect simulation.
Bottom line
It is not artificial intelligence that risks replacing humans, but humans who, little by little, risk unlearning how to make themselves necessary.
A society automated to the point of extinguishing all forms of conflict, work, connection, and desire is not a failure of progress; on the contrary, progress would thus have fulfilled all its functional promises. But it would have lost sight of what cannot be programmed: meaning, otherness, presence.
This world did not collapse into chaos but closed in on itself through optimization. This is neither a tragedy nor a farce but the result of a process of collective abandonment. The abandonment of transmission, of gratuitous effort, of debate, of everything that brings no immediate reward but is the foundation of so much.
All this is, of course, just fiction, but it still says something about the choices we will surely have to make one day.