The thesis

The bottleneck isn't AI capability

Here's what I figured out after 359,000 messages with AI over three years.

"The bottleneck isn't AI capability anymore. It's human reception. Somewhere between GPT-3.5 and Claude 3, something shifted. AI capability stopped being the constraint. The new bottleneck: Can humans understand enough to decide with confidence?"

— 2026-01 | Reddit post | r/artificial

That's the thesis. Let me unpack it.

The entire AI industry is racing to build smarter models. More parameters. Better benchmarks. Faster inference. And they're missing the point.

"100% human orchestrating of AI — I call it 'AI in the loop' as a contrarian to the stupid 2025 'human in the loop.'"

— 2025-12 | claude-code

AI in the loop. Not human in the loop. The human is the orchestrator. The AI is the instrument. Flip the frame.

"The bottleneck is the amplifier. A person's understanding of the situation is the limiter AND the amplifier. AI has to amplify them. But their bottleneck is how AI can teach them."

— 2025-09 | SHELET development | claude-code

Your limitation is your amplifier. The constraint is the leverage point. Don't eliminate it. Design around it.


The formula

Output Value = min(Capability, Understanding)

Once capability exceeds comprehension, expanding it further adds zero value. The only lever that still moves output is human understanding.
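As an illustration, the min() relationship can be sketched in a few lines of Python. The numeric scale and values here are hypothetical, just to make the bottleneck behavior concrete:

```python
def output_value(capability: float, understanding: float) -> float:
    """Output value is capped by the weaker of the two factors."""
    return min(capability, understanding)

# Hypothetical scores on a 0-100 scale.
# Doubling capability while understanding stays flat changes nothing:
print(output_value(80, 40))   # 40
print(output_value(160, 40))  # 40
# Raising understanding is the only move that raises output:
print(output_value(160, 70))  # 70
```

The point of the sketch: any investment on the capability side past the understanding line is wasted, which is exactly what the formula claims.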

"The whole problem to solve is the human sovereignty bottleneck. To enable humans to conduct their AI orchestration we need to make human-AI translation as efficient as possible. That's the only thing we're doing."

— 2025-11 | Thesis development | claude-code

That's the only thing we're doing. Not building smarter AI. Not automating humans away. Making the translation layer work.