About the author
Douglas Adams is best known for The Hitchhiker’s Guide to the Galaxy, a science fiction series that combines absurdity and satire to question our relationship with the modern world.
Passionate about scientific and futuristic topics, he enjoyed mocking bureaucratic foibles and took a critical view of the promises of technological progress.
Historical and intellectual context
This quote dates from the emergence of connected computing. In the 1990s, digital technologies were leaving laboratories and large corporations to reach the general public. Personal computers, email, and the first web browsers: the technological landscape was changing quickly, but it remained uncertain, unstable, and sometimes difficult to understand. The promises were numerous and the failures just as frequent. In this context, talking about technology often meant referring to something still in the works or unproven.
But Adams’ remark goes further. It also questions the way we name and perceive technological tools. It anticipates what many researchers in sociology, innovation studies, and even philosophy have since highlighted: what counts as “technical” is not an objective quality of an object, but a social construct, situated in a specific time and use. What we call “technology” is what we have not yet integrated culturally.
“Technology is a word that describes something that doesn’t work yet.” – Douglas Adams
In short:
- Douglas Adams uses humor and satire to criticize our relationship with technology, pointing out that what we call “technology” is often something that doesn’t work yet or that we haven’t yet integrated into our culture.
- The persistence of the expression “new technologies” reveals a form of confusion between real innovation and social perception, keeping certain tools in a position of exception and preventing critical analysis of their use.
- This lexical and conceptual vagueness hinders relevant technological governance and masks the profound transformations brought about by tools that have become invisible because they have been trivialized.
- The phenomenon of “technological hype,” as described by the Gartner curve, illustrates the cycles of enthusiasm and disillusionment that influence adoption decisions without always taking into account actual uses and the necessary acculturation.
- As artificial intelligence becomes a functional infrastructure, it is slipping out of the public debate, even though thinking about technology requires constant questioning of its uses, effects, and the societal choices it underpins.
Explanation and implications
The interest of this quote lies, at least for me, in a paradox: we call “technology” something that does not yet work, while we continue to talk about “new technologies” for things that have been working for decades.
The simple fact that we still say “new technologies” to refer to computers or the internet, tools that have been around for 30 or 40 years, reflects a kind of short-sightedness: a disconnect between the actual age of these tools and the way we perceive and talk about them. This disconnect can be interpreted in different ways.
On the one hand, it reflects a tendency to freeze certain innovations in a state of permanent exception. By calling them “new” we perpetuate the idea that they are separate and still require specific support, vigilance, or even distance. It is a way of not trivializing them, but also of not questioning them, since they are part of a new paradigm for which we have no reference points.
On the other hand, this persistence of the “new” shows a fascination with change and novelty. The word “technology” is used as a smokescreen, a catch-all term that prevents us from sorting out what is truly innovative, what is established, and what is obsolete. It deliberately maintains a form of confusion that keeps us from questioning novelty itself.
This vagueness has very concrete effects. It makes it difficult to implement mature technological governance and encourages adherence to solutions perceived as “new” without in-depth analysis. It contributes to the gradual invisibility of technologies integrated into our daily lives, even though it is at this stage that they deserve to be examined with the greatest vigilance.
A very contemporary topic
This touches on a much broader phenomenon: technological hype, the media frenzy surrounding certain innovations perceived as disruptive. The Gartner hype cycle describes this pattern well: an innovation trigger, a peak of inflated expectations, a trough of disillusionment, then gradual adoption toward a plateau of productivity. This is not just a question of perception: it influences the way we invest in technologies, integrate them into businesses, and organize their governance.
In this context, the adoption of technologies becomes as much a strategic issue as a cultural one. It is not just a matter of technical deployment, but a series of trade-offs: between real and imagined uses, between “augmentation” of humans and automation, between local adoption and global alignment. Failures in adoption are not uncommon, and some would even say they are the norm, precisely because the necessary work of acculturation, organizational design, and governance is underestimated.
Artificial intelligence is no exception to this logic; I would even say it amplifies every one of its excesses. It combines the hype surrounding general AI, superintelligence, and mass automation with the more mundane realities of partial, sometimes invisible uses. AI is moving from the status of a promise to that of a functional infrastructure: it makes recommendations, optimizes supply chains, assists lawyers, analyzes massive volumes of data, and supports writing, creation, and translation.
However, it is precisely in this phase of commoditization that the risk of blindness is greatest. Once AI is working, i.e., once it is sufficiently fluid, reliable, and present in our routines that it is no longer discussed as a technology, it becomes a kind of implicit norm, and this normalization brings with it another difficulty: that of continuing to think about it, criticize it, and adjust it. What is no longer visible is no longer regulated.
This is where Nicholas Negroponte’s quote, “Computing is not about computers anymore. It’s about living” takes on its full meaning. It reminds us that tools are never neutral. When they become invisible in everyday life, their power to influence grows all the more because it is silent. The question is therefore not whether to accept or reject innovation, but whether we remain capable of choosing the forms of technological life we deem desirable.
Bottom line
This quote from Douglas Adams should make us reconsider how we name, think about, and integrate technologies. By labeling as “technological” anything that does not yet work or is not fully integrated into our daily lives, we reveal our ambivalence towards innovation: attraction to novelty, but difficulty in fully absorbing it; a desire for transformation, but fear of the instability associated with change; a value placed on progress, but a lack of interest in its systemic effects.
What we call “technology” is often what we have not yet domesticated, and what we no longer name is sometimes because we have stopped questioning it. But in the context of the rapid spread of AI, this gradual invisibility of tools is a real issue. It can lead to a form of autopilot in organizations, with choices driven by the capabilities of the tools rather than by reflection on their purpose.
Thinking about technology does not mean constantly anticipating it or passively consuming it, but rather maintaining, throughout its life cycle, the ability to ask certain questions: What is it really for? What trade-offs does it require? How does it transform the way we work, think, and even live together?
Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)