What Remains When the Machine Does (Almost) Everything?
In 2022, experimenting with generative AI still felt like a curious adventure. Just three years later, it has become an invisible backbone of professional life. This rapid, almost imperceptible shift raises an uncomfortable question: what remains for us to do?
We are witnessing a silent but radical revolution. The early experiments with GPT-3 hinted at fascinating possibilities, though they were still hesitant and fragmented. Today, AI has become a true work companion. Every day, I rely on a dozen tools—Gemini Pro, ChatGPT Pro, Midjourney, ElevenLabs, HeyGen, Submagic, n8n, Replit, Cursor, Notion AI, Vectorize. Some tools have fallen by the wayside—Perplexity, Copilot, Runway—while others have become indispensable. My monthly spend approaches €1,000, yet the productivity gain is undeniable. Compared to 2022, I estimate my output has tripled.
This adoption of AI has followed a clear path through four phases, a journey I outlined two weeks ago. It begins with official endorsement: AI is deployed within well-defined guidelines. It deepens as AI is woven into the organization's memory, becoming the interface through which data is accessed. The third stage is process automation, where teams gain the autonomy to turn their repetitive tasks into intelligent agents. Finally comes symbiosis: AI agents are triggered automatically by all sorts of external events, while humans oversee, correct, adapt, and monitor the system as a whole.
Today, my professional and personal practices fall somewhere between the third and fourth phases. AI has ceased to be a mere tool—it has become a collaborator.
But as we approach this fourth stage, we run into a wall, and that wall is real in several ways. First, it is technical: today's language models are still prone to hallucination, and autonomous agents that act on external triggers can, without strict oversight, set off cascades of unpredictable consequences. The recent case of Air Canada, forced to honor a refund its chatbot had erroneously promised, is a reminder of these risks.
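What strict oversight might look like in practice is easy to sketch. The fragment below is a minimal, purely illustrative Python example of the pattern behind the third and fourth phases: an agent handles a routine request triggered by an external event, but anything consequential is routed to a human instead of being executed automatically. Every name in it (handle_refund_request, call_llm, the €50 threshold) is hypothetical, not the API of any tool mentioned above.

```python
# Hypothetical sketch of phase 3/4: an event-triggered agent
# with a human approval gate. All names and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class RefundRequest:
    customer_id: str
    amount_eur: float
    reason: str


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call.
    Returns the agent's proposed action as plain text."""
    return f"APPROVE refund ({prompt})"


AUTO_APPROVE_LIMIT_EUR = 50.0  # illustrative threshold


def handle_refund_request(req: RefundRequest) -> str:
    """Entry point triggered by an external event
    (a webhook, an email, a queue message)."""
    proposal = call_llm(
        f"customer {req.customer_id}, {req.amount_eur} EUR, reason: {req.reason}"
    )
    # Phase 3: the repetitive, low-stakes part is fully automated.
    if req.amount_eur <= AUTO_APPROVE_LIMIT_EUR:
        return f"auto-executed: {proposal}"
    # Phase 4 guardrail: a human reviews anything consequential,
    # so the agent cannot commit to what the company never intended.
    return f"queued for human review: {proposal}"


if __name__ == "__main__":
    print(handle_refund_request(RefundRequest("C-42", 30.0, "late delivery")))
    print(handle_refund_request(RefundRequest("C-43", 900.0, "cancelled flight")))
```

The point is not the plumbing but the division of labor: the agent produces, the human arbitrates, and the threshold encodes where responsibility changes hands.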
The wall is also psychological. The prospect of such deep automation forces us to question our place in tomorrow's value chain, and that doubt alone slows adoption. If machines are capable of managing tasks end to end, what exactly is left for us?
This is where the essence of being human re-emerges.
What remains is intention: the desire, the need, the urge to bring something into existence. Then comes arbitration: the act of making choices that give form to intention, whether the strategy, the product, or the method. And finally, responsibility: the duty to stand behind what is produced, even when an agent produced it, because we remain its author, its supervisor, its owner.
AI can produce, but it cannot want. It cannot choose. It cannot take responsibility. That is, and will continue to be, the unique domain of humans. That is where our value lies.
