AI can do everything—and that’s exactly the problem.
Lately, I wake up in the morning and every idea I have can take shape within a few hours… The rarest skill is no longer technical or strategic. It has become the ability to say no.
This week, I spent a large part of my time on a single project: Qapten.
Qapten is a personal AI assistant I launched two months ago. It’s a concrete example of what we now call an AI agent. We’ve heard a lot about chatbots—those assistants that answer your questions. Agents are the next generation. They don’t just respond: they act. They connect to your tools, make decisions, execute tasks.
Qapten is a personal AI agent based on OpenClaw technology. Unlike ChatGPT or Claude, it learns as you use it. It can perform tasks even when you’re not in front of it, because you’ve programmed it to do so. Each user has their own independent agent, with its own security rules: an assistant that supports you daily in your work, available at assistant.qapten.com. The service is currently in beta, with around a hundred users already onboard.
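To make the chatbot-versus-agent distinction concrete, here is a minimal sketch in Python. Everything in it is illustrative: the tool names, the hard-coded plan, and the `run_agent` function are my assumptions, not Qapten’s actual implementation. A chatbot maps a question to an answer; an agent maps a goal to a sequence of tool calls.

```python
# Toy illustration of the chatbot-vs-agent distinction.
# All names here are hypothetical, not Qapten's real code.

def send_email(to, subject):
    return f"email sent to {to}: {subject}"

def schedule_meeting(person, day):
    return f"meeting with {person} booked for {day}"

# The agent's "tools": actions it may take on your behalf.
TOOLS = {
    "send_email": send_email,
    "schedule_meeting": schedule_meeting,
}

def run_agent(goal):
    """Toy agent loop: pick tools, execute them, report back.
    A real agent would use an LLM to plan; here the plan is hard-coded."""
    plan = [("schedule_meeting", ("Alice", "Tuesday")),
            ("send_email", ("alice@example.com", "Agenda for Tuesday"))]
    results = []
    for tool_name, args in plan:
        tool = TOOLS[tool_name]      # look up the connected tool
        results.append(tool(*args))  # the agent *acts*, not just answers
    return results

print(run_agent("prepare my Tuesday meeting"))
```

The point of the sketch is the loop: a chatbot would stop after producing text, while an agent keeps going, calling the tools it has been connected to.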
On a related note: “Leading in the Age of AI Agents” will be the theme of the next Club TEDxParis dinner that I’ll be hosting with Antoine Bueno, essayist and futurist, and Mehdi El Azhari, tech entrepreneur. About fifteen of us, along with the Planview teams, will be gathered to discuss the promises and limits of AI agents.
Over the past two months, with this new Qapten adventure, I’ve experienced something I hadn’t anticipated: every evolution of the tool calls for another. Every fix, every improvement generates three new ideas for improvement. And given the speed at which my agents and I develop and maintain this platform, these ideas come to life within hours. It’s a bit like repainting a room and realizing the ceiling needs a coat too—and then the floor, and then the house next door…
Yesterday, I even found myself imagining connecting my assistant to augmented reality glasses (you know, the Even Realities ones I’ve mentioned before), the kind that display text directly in your field of vision. The idea is simple: you ask your agent a question out loud, and the answer appears on the lenses, right before your eyes, in real time. Technically, it’s feasible. The idea is there, it’s tempting. And that’s exactly the problem.
Because the real question today is no longer “is it possible?” It’s “is it the right thing to do?”
I’m in Vancouver this week for my seventeenth TED conference—the last one here, actually, since everything will move to San Diego next year, bringing more than a decade in this city to an end. I ran into Peter Steinberger, founder of OpenClaw, the infrastructure Qapten is built on. I pitched him the project, and the first thing he told me wasn’t “great, go for it.” It was: put guardrails in place.
When he talks about guardrails, he’s obviously referring to access security for tools. Qapten doesn’t just answer questions: it acts. And that changes everything, because an agent that acts without limits is an agent that can go off the rails. Hence the importance of guardrails. The assistant ships with no external connections by default, and it is designed to always guide the user toward secure connections to their everyday tools. In Qapten, you decide, application by application, what it can access and how.
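The “application by application” principle can be sketched as a deny-by-default permission check. This is a sketch under stated assumptions, not Qapten’s actual code: the `Guardrails` class, the app names, and the access levels are all hypothetical.

```python
# Sketch of a deny-by-default guardrail: the agent may only touch
# an application the user has explicitly allowed, and only at the
# access level the user granted. Illustrative names only.

class Guardrails:
    def __init__(self):
        # No external connections by default: the allowlist starts empty.
        self.permissions = {}  # app name -> access level ("read" or "write")

    def grant(self, app, level):
        """A user decision, made application by application."""
        self.permissions[app] = level

    def check(self, app, action):
        """Return True only if the user granted enough access."""
        level = self.permissions.get(app)   # None if never connected
        if level is None:
            return False                    # deny by default
        if action == "read":
            return level in ("read", "write")
        return level == "write"             # writing requires "write"

guards = Guardrails()
guards.grant("calendar", "write")
guards.grant("email", "read")

print(guards.check("calendar", "write"))  # True: explicitly granted
print(guards.check("email", "write"))     # False: read-only grant
print(guards.check("bank", "read"))       # False: never connected
```

The design choice worth noting is the empty allowlist: the agent can do nothing until the user opens a door, which is the opposite of an agent that can do everything until someone closes one.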
But Peter’s point goes deeper than that. He’s right in another sense: the power of a tool is no longer measured by what it enables you to do. It’s measured by what you choose not to let it do. And that now applies to our own brains as well…
The spinning platform
I realized something: I put guardrails into the product. But I didn’t put them into my own mind…
Why? Over the past few weeks, I’ve felt like I was standing on a spinning platform turning at full speed. You may know the feeling. We’re living in a time when every idea can take shape within hours. In the morning, ten ideas. Before lunch, three are feasible. By evening, one is already in production. And every success calls for another; every open door reveals five more.
We multiply projects. We seize opportunities. It feels like we’re running. But in reality, we’re no longer choosing.
The only resource that doesn’t regenerate isn’t technology. It isn’t capital. It’s available brain time. In an age where everything is possible, the rarest—and perhaps most valuable—skill is the ability to judge what deserves your attention. And to say no to the rest.
Bartleby: “I would prefer not to…”
When faced with that kind of vertigo, I fall back on a habit I’ve cultivated for a long time. When a question lingers, when I can’t untangle what I’m feeling, I turn to a novel. A story, a piece of fiction that often provides answers.
And this week, I did something a bit ironic: I asked my Qapten assistant. I said, “I’m on this platform spinning at full speed—recommend a book that helps me make sense of it.” It suggested three. One of them I didn’t know—its favorite—by Herman Melville: Bartleby.
A New York clerk in 1853. Efficient, conscientious. And then one day, without drama, without crisis, without any grand announcement, he simply stops responding to requests. Faced with each new task, he repeats the same phrase, unfailingly: “I would prefer not to.”
Just knowing how to say… no.
Melville understood nearly two centuries ago what we still struggle to admit today—especially now that AI makes everything possible: freedom isn’t having every door open.
It’s knowing which ones you will choose not to walk through.

