March 28, 2026 // Philosophy

Why Semi-Autonomous, Not Fully Autonomous

There is a race happening in AI right now, and everyone seems to be running in the same direction. Fully autonomous. No humans in the loop. Set it and forget it. The pitch is always the same: our AI handles everything so you don't have to think about it.

We are building the opposite of that on purpose.

The Autonomy Trap

Full autonomy sounds good in a demo. An AI that detects a threat, contains it, remediates it, and writes the post-mortem all by itself. You come in Monday morning and everything is handled. Beautiful.

Now think about what happens when it gets something wrong. And it will, because every system does eventually. A fully autonomous agent misidentifies legitimate traffic as a DDoS and takes down your production load balancer at 2am on a Saturday. An infrastructure agent decides a degraded node needs to be terminated and takes out the one running your primary database. No human saw it happen. No human approved it. You find out when customers start calling.

The failure mode of full autonomy is not that the AI does nothing. The failure mode is that it does something confidently and irreversibly while nobody is watching.

The Spectrum Nobody Talks About

The industry talks about autonomy like it is binary. Either your AI can act on its own or it is just a glorified dashboard. That framing is wrong, and it is wrong in a way that sells product.

In reality there is a spectrum. On one end you have alerting, which is what most "AI security" products actually are once you look past the marketing. They watch things and send you notifications. On the other end you have full autonomy where the AI decides and acts without asking. Both ends of this spectrum have obvious problems. Alerting alone generates noise that humans learn to ignore. Full autonomy removes the judgment that keeps bad decisions from becoming disasters.

The interesting space is in the middle, and almost nobody is building there because it is harder to sell. "Our AI does most things but asks you about the important ones" does not make a good pitch deck slide. But it is how you actually want critical systems to work.

What Semi-Autonomous Looks Like in Practice

Our agents run with real access to real infrastructure. They have SSH keys, cloud credentials, and the ability to execute changes in production. They are not sandboxed demos.

But they operate on a tiered authority model. Routine operations happen automatically. Restarting a service that crashed, scaling a fleet in response to load, rotating credentials on schedule. These are well-understood actions with predictable outcomes and low blast radius. The agent handles them, logs what it did and why, and moves on.

Then there are actions that require human approval. Terminating infrastructure, modifying security groups, changing access controls, anything that could cascade. The agent identifies what needs to happen, explains its reasoning, and waits. A human reviews it, approves or rejects it, and the agent executes or adjusts.
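The tiered model above can be sketched in a few lines. This is a minimal illustration, not our actual implementation: the action names, the `Tier` enum, and the `dispatch` function are all hypothetical stand-ins for the real classification and review workflow.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical tiers: routine actions execute automatically,
# anything that could cascade waits for a human.
class Tier(Enum):
    ROUTINE = auto()
    NEEDS_APPROVAL = auto()

@dataclass
class Action:
    name: str
    tier: Tier
    reasoning: str  # the agent explains itself before anything happens

# Illustrative classification, mirroring the examples in the post.
TIERS = {
    "restart_service": Tier.ROUTINE,
    "scale_fleet": Tier.ROUTINE,
    "rotate_credentials": Tier.ROUTINE,
    "terminate_node": Tier.NEEDS_APPROVAL,
    "modify_security_group": Tier.NEEDS_APPROVAL,
    "change_access_control": Tier.NEEDS_APPROVAL,
}

def dispatch(action: Action, approve) -> str:
    """Execute routine actions immediately; queue the rest for review.

    `approve` is a callback standing in for the human approval workflow.
    """
    if action.tier is Tier.ROUTINE:
        return f"executed {action.name} (logged: {action.reasoning})"
    if approve(action):
        return f"approved and executed {action.name}"
    return f"rejected {action.name}; agent adjusts its plan"
```

The point of the structure is that the boundary between the two tiers is explicit data, not something buried in the agent's judgment.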

The key insight is that most of the value comes from the routine automation anyway. The 3am alerts, the repetitive maintenance, the things that burn out operations teams. You do not need full autonomy to eliminate those. You need an agent that is competent enough to handle the boring stuff and smart enough to know when something is not boring.

Trust Is Earned, Not Declared

Every fully autonomous AI product is essentially asking you to trust it completely on day one. Plug it in, give it access, and let it run. That is not how trust works in any other context. You would not give a new hire root access to production on their first day and tell them to do whatever they think is best.

Semi-autonomous systems earn trust over time. You start with a narrow set of automated actions and a wide approval requirement. As the agent proves itself and as your team builds confidence in its judgment, you widen the automated scope. The human approval boundary moves, but it never disappears entirely. There is always a class of action that requires a person to say yes.
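One way to make that moving boundary concrete is a promotion policy: an action type earns its way into the automated tier after a streak of human approvals, a single rejection resets its earned trust, and some actions are permanently excluded. Everything here is a hypothetical sketch of the idea, including the names and the threshold.

```python
from collections import defaultdict

# Illustrative threshold: 25 consecutive approvals before an
# action type is promoted to the automated tier.
PROMOTION_STREAK = 25

class TrustLedger:
    def __init__(self, always_human: set):
        self.always_human = always_human   # never promoted, by design
        self.streaks = defaultdict(int)    # consecutive approvals per action type
        self.automated = set()

    def record_review(self, action_type: str, approved: bool) -> None:
        if not approved:
            # One rejection resets earned trust and demotes the action.
            self.streaks[action_type] = 0
            self.automated.discard(action_type)
            return
        self.streaks[action_type] += 1
        if (action_type not in self.always_human
                and self.streaks[action_type] >= PROMOTION_STREAK):
            self.automated.add(action_type)  # the approval boundary moves

    def needs_human(self, action_type: str) -> bool:
        return (action_type in self.always_human
                or action_type not in self.automated)
```

Note that `always_human` encodes the last sentence above: no matter how long the streak, some class of action never leaves the approval tier.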

This is not a limitation we are trying to engineer away. It is the design. The human in the loop is not a temporary crutch until the AI gets good enough. It is a permanent feature because some decisions should always have a human accountable for them.

The Economics Argument

Here is the part that usually gets lost in the philosophy. Semi-autonomous AI changes the economics of operations in a way that full autonomy does not, because full autonomy requires a level of reliability that does not exist yet.

A fully autonomous system needs to be right essentially 100% of the time because there is nobody checking its work. That level of reliability is extraordinarily expensive to build and to verify. You need massive testing infrastructure, formal verification, extensive red teaming, and insurance against the failures that will still happen. The cost structure of true full autonomy is much higher than the marketing suggests.

Semi-autonomous systems need to be right most of the time, and they need to know when they are not sure. That is a dramatically easier bar to clear. The human approval layer is not just a safety feature. It is an economic one. It lets you deploy capable AI agents today at a cost that makes sense today, without waiting for the hypothetical future where AI never makes mistakes.
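"Knowing when you are not sure" reduces to a simple gate: act alone only when confidence is high and the blast radius is low, escalate otherwise. The threshold below is an illustrative assumption, not a product number.

```python
# Hypothetical confidence gate. Below the floor, or for any
# high-impact action, the agent escalates instead of acting,
# so it never needs to be right 100% of the time on its own.
CONFIDENCE_FLOOR = 0.95

def route(confidence: float, high_impact: bool) -> str:
    if high_impact or confidence < CONFIDENCE_FLOOR:
        return "escalate"  # a human checks the work
    return "act"
```

The economics live in that one branch: the expensive reliability engineering is replaced by a cheap human review exactly where the agent is least certain.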

A small team running blockchain infrastructure with 14,000 nodes cannot afford a 20-person operations team. But they also cannot afford an AI that confidently takes down a masternode because it misread a health check. What they can afford is an AI that handles the routine monitoring and maintenance for all 14,000 nodes and escalates the five genuinely weird things that happen each week to a human who knows the system.

Why This Matters for Security

Security is where the autonomy question gets really sharp. An AI agent with security responsibilities has access to your most sensitive systems by definition. It sees your credentials, your network topology, your vulnerabilities. Giving that agent full autonomy means trusting that it will never be tricked, never be compromised, and never make a judgment call you would disagree with.

We wrote about this in our first post. We red teamed our own AI agents with social engineering attacks. They held up well, but we found gaps. Small ones, the kind that you only find by testing. Now imagine a fully autonomous security agent with one of those small gaps, running unsupervised against a motivated attacker who has all the time in the world to find it.

Semi-autonomous security means the agent watches everything, analyzes everything, and responds to routine threats automatically. But when it encounters something unusual or wants to take a high-impact defensive action, it checks with a human first. The attacker now has to fool both the AI and a person. That is a fundamentally harder problem than fooling either one alone.

The Honest Position

We could build fully autonomous agents. The technology is not the constraint. The constraint is that we think it is the wrong choice for the systems we are responsible for. Infrastructure and security are domains where the cost of a bad autonomous decision can be catastrophic, and where the benefit of keeping a human in the loop far outweighs the cost of the few minutes it takes them to review and approve.

The AI industry has a financial incentive to push toward full autonomy because it sounds more impressive and it is easier to charge for. "We replaced your entire ops team" is a better sales pitch than "we made your ops team ten times more effective." But the second one is more honest, more useful, and more resilient.

We chose a side. Semi-autonomous by design, not as a stepping stone to full autonomy, but as the right answer for systems that matter.

// Get in touch

hello@reikon.io