
Automated, Not Autonomous: Kuvi CTO Jay Nasr on Why Agents Aren't Ready

In this interview, Kuvi CTO Jahāngir "Jay" Nasr discusses the state of AI agents heading into 2026 — what's holding them back, how organizations should adjust expectations, and why the industry keeps confusing automation with autonomy.
What hurdles are keeping agents from being rolled out in a widespread manner?
Excluding regulatory complexities, agents are already mostly in the wild. What's technically holding them back is threefold. First, reliability: people have traditionally trusted computers for perfect accuracy, whereas LLMs are inherently non-deterministic and prone to hallucinations. Second, scalability — it's one thing for an agent to reliably serve ten people, quite another to serve ten thousand. Third, agents are framed as "thinking" and "reasoning" actors, when their real strength lies in being statistical machines.
What do organizations need to do to fix the roadblocks?
Organizations should shift from a prediction mindset to building robust systems — a crucial distinction. Predictive systems try to preempt errors; robust systems tolerate them with built-in fail-safes. Another key is adjusting expectations: we dismiss 60% gains chasing "all-or-nothing" perfectionism, expecting agents to be as flawless as chess AI. That's simply not realistic — and logically so, since the problems agents solve are multidimensional and orders of magnitude more complex. In that regard, it's akin to No-Limit poker, which hasn't been (and possibly can't be) solved, its unbounded betting structure making it vastly more intricate.
Do you see agents going mainstream in 2026, or are they not quite ready yet?
Historically — and minding nonlinearity — 2026 will build immense momentum but almost certainly won't be the mainstream year for agents. As 2025 hype waned, many companies gave up and pivoted, learning that wrapping simple requirements around an LLM isn't going to cut it. Meanwhile, large language model capabilities have somewhat plateaued, though progress continues. The tech is still very much nascent; altogether, extrapolating the current rate of progress, 2030 feels more like the year agents truly go mainstream.
Do CIOs trust agents to make autonomous decisions? If agents aren't making autonomous decisions, what value do they have?
The biggest confusion in this space is mistaking automation for autonomy. We're nowhere near truly autonomous agents, despite some narratives. What we do have today are highly capable automated, semi-supervised systems that can execute well-defined intents with speed, consistency, and guardrails — and that by itself is enormously valuable.
One can see the mirage clearly in Marc Andreessen's "AI employees" commentary or aixbt-style fully autonomous trading personas. They're compelling demos, but not systems CIOs would entrust with real capital. Institutions don't want black-box autonomy; they want deterministic execution with human-defined intent and oversight. That's how every mission-critical system works today: autopilot doesn't replace the pilot. Algorithmic trading doesn't eliminate portfolio managers. CI/CD doesn't remove engineers. Instead, automation compresses time, reduces errors, and scales judgment within constraints.
Once expectations shift from "agents that think for you" to agentic frameworks that execute for you in fail-safe, inspectable environments, a new dimension of productivity opens up.
CIOs don't need agents to decide what to do — they need systems that reliably carry out decisions, test strategies, monitor conditions, and act instantly when thresholds are met. That's where the real value lies: not in autonomy, but in leverage over complexity.
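The monitor-and-act pattern described here boils down to human-defined intent (the threshold and the response) with automated, inspectable execution. A minimal sketch — the metric source, threshold value, and scaling action below are hypothetical illustrations, not a real monitoring API:

```python
import time

# Human-defined intent: people set the threshold and the response;
# the system only decides *when* the condition is met.
CPU_ALERT_THRESHOLD = 0.90

def read_cpu_utilization() -> float:
    # Hypothetical metric source; a real system would query monitoring.
    return 0.95

def scale_out(audit_log: list[str]) -> None:
    # Well-defined, guardrailed action that leaves an audit trail.
    audit_log.append(f"{time.time():.0f} scale_out triggered")

def monitor_once(audit_log: list[str]) -> bool:
    """Act instantly when the human-set threshold is crossed."""
    utilization = read_cpu_utilization()
    if utilization >= CPU_ALERT_THRESHOLD:
        scale_out(audit_log)
        return True
    return False
```

Nothing here "thinks": the judgment lives in the threshold and the chosen action, both human-authored, while the automation supplies speed, consistency, and an inspectable record.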