Most operators drive modern agents the way they drive chatbots. The bottleneck is the user.
A framework for diagnosing where AI fluency breaks down, and what to fix at each level.
The category error
Claude Code is not ChatGPT with files.
It's an agent. It reads a codebase. It plans across multiple files. It executes changes. It iterates on failures. The execution loop runs independently of the operator who started it.
Operators bringing chatbot habits to an agent pay for it. In plan limits. In frustration. In quiet abandonment.
Long meandering threads. Vague sweeping questions. Mixed topics in a single session. The behaviors are invisible to the operator and obvious to the system.
Anthropic's own support documentation names the primary cause of plan-max issues: long sessions that were never cleared. The single most effective lever is a command most operators don't know exists.
The fix is not more tokens. It's a different mental model.
Three levels of operator fluency
AI is not a tool you adopt. It's a language you learn to speak.
Some operators travel through it with a phrasebook. Some build inside it. Some conduct in it.
The progression is real. Observable. The gap between levels is the difference between abandoning the technology and shipping with it.
Tourist
The Tourist has it installed. They use it like ChatGPT. They mix topics in a single session. They default to the most expensive model on every task. They have no project memory file.
They hit limits weekly. They conclude the technology is overhyped.
What they ship: Almost nothing they couldn't have produced in a chat window.
What it costs: The plan fee. Hours of frustration. A creeping conviction that AI can't do what the headlines claim.
Builder
The Builder has a project memory file. They plan before they execute. They switch models based on the task. They clear context between unrelated work.
They hit limits during real sprints. Not on a Tuesday morning.
They're building things they couldn't previously build alone.
What they ship: Internal tools. Prototypes. Real automations. Working software.
What it costs: An afternoon learning the patterns. Returns the investment in days.
Orchestrator
The Orchestrator runs multiple agents in parallel. Custom slash commands. Specialized skills. Subagents handling discrete components. Verification loops where the agent tests its own output before reporting back.
The output capacity of a small team. Operated by one person.
Boris Cherny, who built Claude Code at Anthropic, runs five Claude Code instances simultaneously across numbered terminal tabs. This is not a stunt. It's what native fluency looks like.
Orchestrator has two phases. Phase one is personal mastery. Phase two is scaling that mastery to the operators around you. Personal leverage compounds in months. Organizational leverage compounds in quarters.
What they ship: Systems. Not features. Then teams.
What it costs: Real practice. Months, not afternoons. Reachable by anyone willing to climb the ladder one rung at a time.
Which level are you?

The Operator Fluency Assessment maps your current workflow against the three levels and identifies the specific patterns blocking your next rung. Three minutes.
Take the assessment →

Five token sinks that keep Tourists stuck
Climbing from Tourist to Builder is not about willpower or talent. It's about replacing five specific habits.
Each one is documented. Each one has a fix that takes minutes to learn.
01. The forever session
The operator opens Claude Code on Monday and is still in the same session on Friday. Every new question forces the agent to re-read every previous message. The cost compounds invisibly until the limit hits.
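The compounding can be sketched with back-of-envelope arithmetic. The per-turn token figure below is an illustrative assumption, not a measured number:

```python
# Rough model of an uncleared session: each new turn re-processes
# every previous message, so total input tokens grow quadratically.
# TOKENS_PER_TURN is a made-up average for illustration only.

TOKENS_PER_TURN = 2_000

def context_tokens_processed(turns: int) -> int:
    """Input tokens re-processed across a session of `turns` exchanges."""
    return sum(TOKENS_PER_TURN * t for t in range(1, turns + 1))

# One week-long session of 100 turns vs. five cleared sessions of 20:
forever = context_tokens_processed(100)     # 10,100,000 tokens
cleared = 5 * context_tokens_processed(20)  # 2,100,000 tokens
print(f"uncleared re-processes {forever / cleared:.1f}x more input")
```

Same number of exchanges, nearly five times the input tokens on these assumptions, and the gap widens the longer the session runs.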
The /clear command resets context without losing project memory. Anthropic identifies this as the single most effective lever for both quality and cost.

02. Opus on everything
Operators leave the most powerful model set as default and never change it. Opus costs roughly five times what Sonnet does in plan tokens. For most tasks, the quality difference is invisible.
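As a sketch of the arithmetic, using the roughly five-to-one Opus-to-Sonnet ratio above; the 20/80 plan-to-execute split is an assumption for illustration:

```python
# Compare all-Opus usage with a plan/execute split (Opus plans,
# Sonnet executes). OPUS_WEIGHT reflects the ~5x plan-token ratio
# cited above; the plan fraction is a guess, not a measurement.

OPUS_WEIGHT = 5.0
SONNET_WEIGHT = 1.0

def weighted_cost(total_tokens: int, plan_fraction: float, split: bool) -> float:
    """Plan-token cost of a session, in Sonnet-equivalent units."""
    plan = total_tokens * plan_fraction
    execute = total_tokens - plan
    if split:
        return plan * OPUS_WEIGHT + execute * SONNET_WEIGHT
    return total_tokens * OPUS_WEIGHT  # everything on Opus

all_opus = weighted_cost(100_000, 0.2, split=False)  # 500,000.0
split = weighted_cost(100_000, 0.2, split=True)      # 180,000.0
```

On these assumptions the split runs at under 40 percent of the all-Opus cost while still spending Opus where it matters: the plan.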
The opusplan mode uses Opus to plan and Sonnet to execute, capturing both benefits.

03. The vague sweep
"Analyze my repo and tell me what is wrong" forces the agent to grep dozens of files, read them partially, and produce a long summary. It is one of the most expensive operations an operator can request. The same goal scoped precisely ("find out why the checkout tests started failing," say) consumes a third of the tokens.
04. The blind start
Without a project memory file, every session begins with the operator re-explaining their stack, their preferences, their conventions, and their last session's output. Twenty minutes of context-loading before any real work happens.
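What a project memory file buys is concrete. A minimal sketch of a CLAUDE.md follows; the section names and contents are illustrative assumptions, not a required format:

```markdown
# CLAUDE.md (illustrative example)

## Stack
- Python 3.12, FastAPI, Postgres 16

## Conventions
- Run `ruff check` before committing
- Tests live in tests/ and mirror the src/ layout

## Current state
- Last session: refactored the billing worker; retry path still untested
```

Twenty minutes of re-explaining collapses into a file the agent loads on its own at session start.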
05. The mixed chat
The operator asks for help with a marketing email. Then a pricing analysis. Then debugging a script. All in the same session. Every old message is dead weight that gets re-processed on every new turn.
The plan-max paradox
Executives measuring AI adoption look at consumption. Seats deployed. Tokens spent. Plan caps hit.
The implicit logic: more usage equals more adoption equals more value.
The data inverts the assumption.
The operators hitting plan-max are most often Tourists thrashing through limits with vague prompts and uncleared sessions. The operators producing the highest output are Builders and Orchestrators using a fraction of the tokens with disciplined patterns.
Same logic applies at the organizational level. The AI Adoption Ladder has four rungs: Access, Activity, Integration, Leverage. Most companies measure Access and call it adoption. The companies winning with AI measure Leverage.
The framework is forthcoming as a separate piece of work. For now, the principle is the same.
Stop measuring consumption. Start measuring fluency.
Applied analysis
The framework's value is predictive. These cases show what each level actually looks like in operation.
The $1,389 month
A documented case: a Claude Code user running the same workflow as a Builder, at roughly seven times the cost. Four invisible patterns produced the difference.
Long uncleared sessions. Default Opus on every call. No prompt cache management. Sub-agents spawned without strategic context, returning useless data and forcing re-spawns.
Same workflow. Same goals. Different fluency.
Cost: $1,389/month. After fixing the four patterns: $200/month.
The factor was not the tool. It was the operator.
Vulcan Technologies
Two co-founders. Neither with an engineering background. The CEO's last programming experience was an AP JavaScript class in high school where he wrote code with pen and paper.
They founded Vulcan in April 2025 to tackle regulatory code complexity. By May 1st they had a prototype built for Virginia's governor's office. They won the contract over established consulting firms.
They didn't have a technical advantage. They had Claude Code, treated as an agent stack rather than a chatbot.
That's the entire difference.