The Computer Finally Learned to Listen
There’s a bar in Compostela — I won’t say which one, because then everyone would go and ruin it — where the bartender has never once asked what I want. He sees me walk in, he pours. Fifteen years of this. He knows by the way I sit whether it’s a coffee day or a ribeiro day. He’s never wrong.
I mention this because I spent an hour last week arguing with my computer about where it put a file.
The file existed. I had written it. The computer knew this. But the computer wanted me to remember the exact path, the exact name, the exact spelling. The computer, you understand, has perfect memory and zero initiative. A clerk with infinite filing cabinets and no common sense.
“You know what I mean,” I said to the screen, the way you do.
It did not know. It cannot know. It was not built to know.
In 1984 — the year, not the book, though the book is relevant — Apple told us the future of computing was a desktop with folders. A metaphor. Your files would live in little pictures of folders, and you would organise them, and everything would be intuitive.
And it was. For about ten years.
Then we got more files. Then we got email. Then we got the web, and tabs, and apps, and notifications, and cloud storage, and suddenly the desktop metaphor was a cruel joke. Nobody’s desk has 47,000 items on it. Nobody’s desk sends you messages while you sleep.
The metaphor broke. But we kept using it. We’re still using it. Forty years of dragging little pictures of folders around, pretending this is normal, pretending our grandchildren will do the same.
They won’t.
I have been using something called OpenClaw (I know, I know — the name).
I’m going to try to explain what it does, but the explanation will sound either too simple or too strange, depending on who you are. Here it is: you tell the computer what you want, in words, and it does it.
Not “type this command.” Not “click here, then here, then here.” Not “learn this application’s seventeen menus.” You say what you want. The computer figures out how.
Find that article I was reading last week about Portuguese trains.
Send this document to my editor.
What meetings do I have tomorrow?
Make the text bigger. No, all the text. Everywhere.
And it does it. Not because it has been programmed with every possible request — that would be impossible — but because it understands the request and knows how to use the tools.
This sounds like science fiction, I know. It also sounds like those voice assistants that have been disappointing us for a decade. But there’s a difference, and the difference matters.
The voice assistants were party tricks. They could answer questions. They could set timers. They could play music, if you asked in exactly the right way. But they couldn’t do anything real, because they had no hands. They were heads in jars.
This is different. This has hands.
A friend of mine — a carpenter, proper old-school, makes furniture that will outlast us all — once told me about his apprenticeship. The master would say: “Make me a table.” Not: “Take the oak plank from the third shelf and cut it to 180 centimetres using the table saw with the blade set to…” Just: “Make me a table.”
The apprentice had to figure out the rest. That was the point. The master’s job was to describe outcomes. The apprentice’s job was to understand the craft well enough to achieve them.
We have been the apprentice for forty years. The computer has been the master. It tells us exactly which button to press, and we press it. We translate our intentions into its language, every time, thousands of times a day.
This is backwards. This has always been backwards. We just didn’t have any other option.
Now we might.
I call it an agentic operating system, which is an ugly phrase, but I haven’t found a better one. The idea is simple: the operating system stops being a filing cabinet with buttons and starts being a capable assistant. You describe what you want. It acts.
Not an assistant you have to micromanage. Not an assistant that interrupts you with questions every thirty seconds. An assistant like that bartender in Compostela: one that has been paying attention, that knows the context, that does the right thing without making you explain everything from first principles.
This is not about artificial intelligence, or not only. It’s about architecture. About deciding that the human describes outcomes and the machine handles implementation. About accepting that this is how it should have worked all along.
Now. Here is where I get nervous.
Because this future is coming whether we build it or not. The large technology companies see it too. Apple is working on it. Microsoft is working on it. Google is certainly working on it. They have more engineers than some countries have citizens.
And they will build it their way.
Their way means: locked to their hardware. Their way means: your conversations with your computer stored on their servers, forever, for their benefit. Their way means: the assistant works brilliantly, as long as you stay inside their garden and pay their subscription and accept their terms.
We have seen this before. We are living in it now.
The personal computer was supposed to be personal. That was the revolution. Before the PC, computers belonged to institutions — universities, corporations, governments. The PC put the machine in your home, under your control.
And for a while, that was true. You owned the hardware. You owned the software. You could open it up, see how it worked, modify it if you knew how. The computer was yours.
Then, slowly, the ownership slipped away.
The software moved to subscriptions. The files moved to the cloud. The operating system started requiring an account, then an internet connection, then your biometric data. The personal computer became a personal terminal to someone else’s computer. You were renting your own desk.
Free software — what Richard Stallman started in the 1980s, what Linus Torvalds made practical in the 1990s — was the resistance. GNU, Linux, the whole ecosystem of tools that belong to everyone and no one. It preserved the idea that a computer could be truly yours.
Millions of us use it. Servers run on it. Phones run on it (though you wouldn’t know it from how locked down they are). It works. It matters.
And now we need it again, for the next shift.
Because if the agentic operating system only exists in proprietary form — if the only way to have a computer that listens is to let Apple or Microsoft or Google mediate every interaction — then we have lost something that will be very hard to get back.
Not just privacy, though that too. Not just ownership, though that too. Something more fundamental: the ability to have a relationship with your own tools that isn’t supervised by a corporation with its own interests.
The bartender in Compostela doesn’t report my drinking habits to anyone. He doesn’t suggest I might enjoy a Coca-Cola instead. He doesn’t pause, mid-pour, to show me an advertisement. The relationship is simple: I am the customer, he is the bartender, the drink is good.
I want that with my computer. I suspect I am not alone.
So here is my request, to anyone who builds things, to the communities that have kept free software alive for forty years:
Build this. Build the agentic layer that can run on GNU/Linux, on BSD, on any system that respects its users. Build it in the open, so we can see how it works. Build it so the conversations stay on our machines, so the assistant serves us and not some distant data centre.
It doesn’t have to be perfect. It doesn’t have to compete with what Apple will ship in three years with a billion-dollar marketing campaign. It has to exist. It has to be an option. It has to keep the door open.
OpenClaw is a start. There are others. The architecture is possible — I know, because I am using it, right now, to write this, in ways that would have seemed like magic five years ago.
The future where computers finally listen is coming. The question is whether we’ll own that future, or rent it.
I know which one I’d prefer.
The computer still can’t pour a ribeiro. But it’s learning to pay attention. That’s something.