Augmentation Model

The Blueprint: How Markel Insurance Got 113% More Productive Without Losing a Single Underwriter

A case study in understanding which parts of judgment work actually require human judgment, and which parts don't.

Insurance underwriting is data work masquerading as judgment work

Insurance underwriting is judgment-intensive. Every quote requires someone to assess risk, price exposure, and decide coverage terms. That's real judgment. But here's the trap: much of an underwriter's day isn't judgment at all.

Markel's underwriters were spending 30% of their time on low-skill, low-value tasks. Rekeying data between systems. Pulling information from PDFs and emails. Standardizing submission formats. This wasn't judgment work. It was data handling.

The result was predictable. Quote turnaround averaged one day. Brokers waited. Deals stalled. Volume was capped by headcount, and headcount was constrained by the need to staff all those rekeying hours.

Markel needed to grow volume without proportionally growing underwriter headcount. The easy answer was tempting: automate the underwriters. Replace them with AI. But Markel asked a smarter question instead: which parts of underwriting are actually judgment, and which parts are just data handling?

Augment the judgment, automate the data work

In 2021, Markel partnered with Cytora, an insurance AI platform built on this principle: don't replace underwriters. Give them better material to work with.

Cytora's job is to handle everything that isn't judgment. The platform digitizes incoming submissions from PDFs, emails, and forms. It standardizes unstructured data. It enriches applications with 12 different data sources: past loss history, public records, property data, financial statements, industry benchmarks, and more. It performs upfront triage and risk classification. It routes decision-ready packages to the right specialists.
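The flow described above — digitize, standardize, enrich, triage, route — can be sketched as a simple pipeline. Everything below is a hypothetical illustration, not Cytora's actual product or API: the class names, field parsing, and the stub enrichment source are all invented for this sketch.

```python
from dataclasses import dataclass, field

# Illustrative pipeline: digitize -> standardize -> enrich -> triage -> route.
# All names here are hypothetical; they do not come from Cytora's platform.

@dataclass
class Submission:
    raw_text: str                              # extracted from PDFs, emails, forms
    fields: dict = field(default_factory=dict)  # standardized key/value fields
    enrichment: dict = field(default_factory=dict)  # third-party data attached
    risk_class: str = "unclassified"

def digitize(raw_text: str) -> Submission:
    """Turn an unstructured submission into a structured record."""
    return Submission(raw_text=raw_text)

def standardize(sub: Submission) -> Submission:
    """Normalize 'key: value' lines into one canonical field format."""
    for line in sub.raw_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            sub.fields[key.strip().lower()] = value.strip()
    return sub

def enrich(sub: Submission, sources: list) -> Submission:
    """Attach external data (loss history, public records, benchmarks...)."""
    for source in sources:
        sub.enrichment[source.name] = source.lookup(sub.fields)
    return sub

def triage(sub: Submission) -> Submission:
    """Visible-layer pattern match: is this a standard risk?"""
    claims = sub.enrichment.get("loss_history", {}).get("claims", 0)
    sub.risk_class = "standard" if claims < 3 else "complex"
    return sub

def route(sub: Submission) -> str:
    """Send the decision-ready package to the right specialist."""
    return "senior_underwriter" if sub.risk_class == "complex" else "underwriter"
```

The point of the sketch is the division of labor: every function here is mechanical data handling, and the output is a decision-ready package. The accept/price/terms decision happens after `route`, in a human's hands.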

What do humans keep? Everything that matters. The architecture maps to three layers of judgment, and each layer is treated differently.

Visible Layer
Pattern matching from enriched data. "Is this a standard risk?" The AI answers this using data sources and historical patterns. Low stakes if wrong. Easily corrected by a human eye. AI owns this layer.
Contextual Layer
Unusual risk assessment and pricing. "Should we accept this? What terms?" The human answers this. But the AI has prepared the ground. The underwriter gets enriched data, risk classification, and recommendations from the visible layer. The AI makes the contextual judgment easier, not easier to avoid. Humans own this layer.
Invisible Layer
Broker relationships, institutional knowledge, cultural memory. What classes of business perform well over time? How does this opportunity fit into our portfolio? Who needs mentoring? These are pure domain expertise and organizational memory. AI never touches this layer.

The handoff between layers is explicit. AI prepares the visible layer. Humans decide the contextual layer. Invisible layers stay invisible. Everyone knows their job.

113% productivity increase with no underwriter replacement

113% increase in Gross Written Premium per Full-Time Equivalent (GWP per FTE)
12x faster quote turnaround (24 hours to 2 hours)

The numbers are verified. Insurance Business UK reported the 113% productivity increase in GWP per FTE. Reinsurance News confirmed the same metric. Cytora published a full case study. The improvement is real.

What happened beneath the numbers: 30% of underwriter time was freed from low-value data tasks. Quote turnaround dropped from 24 hours to 2 hours. Fewer input errors because data standardization happens once, correctly. Complex cases were routed to senior underwriters immediately instead of sitting in generic queues. Growth happened without proportional headcount growth.

And critically: zero underwriters were eliminated. In September 2025, Applied Systems acquired Cytora. Markel deepened the partnership rather than winding it down. The architecture worked. It stayed.

How Markel passed every architectural test

The Judgment Architecture defines three tests for responsible AI deployment in judgment work. Markel's implementation passed all three.

Values Alignment
Underwriter expertise is valued, not threatened. Humans are still the decision-makers. AI is explicitly positioned as preparation, not replacement. The invisible layers (institutional knowledge, mentoring, relationships) are untouched. Underwriters can see themselves in the future architecture.
Liability Exposure
Humans make all final decisions and own the risk. The AI handles the visible layer (pattern matching, data work) where mistakes are low-stakes and reversible. Contextual decisions about unusual risks, pricing, and coverage terms stay with underwriters. The liability chain is clear.
Escalation Path
Complex cases are always routed to senior underwriters. The system is tuned to flag uncertainty, not hide it. When the AI is unsure, the case goes to the person with the most experience. No judgment bottleneck. No AI guess replacing human expertise.
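The escalation rule described above can be written in a few lines. This is a hedged illustration of the principle, not Markel's or Cytora's implementation: the threshold value and all names are invented, and a real system would calibrate uncertainty far more carefully.

```python
# Illustrative escalation rule: the model reports a confidence score, and
# anything uncertain or unusual goes to a senior underwriter instead of
# being decided by the AI. Threshold and queue names are hypothetical.

SENIOR_QUEUE = "senior_underwriter"
STANDARD_QUEUE = "underwriter"
CONFIDENCE_FLOOR = 0.85  # below this, the AI must not pre-classify

def escalate(risk_class: str, model_confidence: float) -> str:
    """Route a triaged case; flag uncertainty rather than hiding it."""
    if model_confidence < CONFIDENCE_FLOOR or risk_class != "standard":
        return SENIOR_QUEUE    # unsure model or unusual risk: human expert
    return STANDARD_QUEUE      # low-stakes, reversible visible-layer work
```

The design choice worth noticing is the direction of the default: when in doubt, the case escalates. An AI guess never substitutes for the person with the most experience.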

Markel didn't just build AI into insurance. They understood which walls were load-bearing and which were filler.

This is bigger than insurance

Markel's architecture isn't unique to insurance. The same pattern appears in radiology (AI flags anomalies, radiologists decide complexity), law (AI retrieves case law, lawyers decide strategy), finance (AI identifies outliers, analysts decide valuation), and manufacturing (AI detects defects, engineers decide tolerance).

The pattern is simple and repeatable. Automate the visible layer. Augment the contextual layer. Protect the invisible layer. Keep humans in the critical loop. The formula works because it respects the structure of judgment work instead of pretending judgment doesn't matter.

The building stays standing because someone took the time to understand which walls were load-bearing.

Most AI deployments fail because they treat judgment as a problem to eliminate instead of a layer to preserve. They try to automate judgment itself. They push AI further than safety, expertise, or values can support. Then they wonder why adoption stalls, why trust breaks, why the best people leave.

Markel won because they refused to pretend. They asked what was actually judgment. They built AI to handle what wasn't. They left judgment to the people who do it best. The productivity didn't come from replacing underwriters. It came from letting them do their actual job.

Explore the Framework

The Judgment Architecture is a model for understanding where AI succeeds, where it fails, and where it must never go. Learn how to apply it to your organization.