Jacob Sandry at Euclid Power: AI in High-Stakes Industries
Why AI's 99% isn't good enough for high-consequence workflows — and how to solve for it
This week on Verticals:
Why 99% accuracy is the same as wrong in high-stakes industries
The “exoskeleton” model — AI suggests, humans stamp, clients trust
Selling into CapEx budgets and why it changes everything
Is regulation actually a moat?
Monetizing via share of upside vs. work-to-be-done
Check out the full episode on YouTube.
There’s a popular assumption circulating in AI right now: that the remaining accuracy gap between foundation models and human experts is closing so fast that it barely matters. In consumer or SMB applications, that may be true. But in verticals where a single error can kill a $100M deal (or worse) — energy, infrastructure, defense, structured finance — 99% accuracy is functionally identical to wrong. The last 1% is the entire business.
Jacob Sandry knows the cost of that last mile personally. Before co-founding Euclid Power, he helped build Goldman Sachs’ renewables investment fund from zero to $3B deployed. At 26, he was walking into Goldman’s investment committee recommending $500M commitments on solar portfolios. “If I had a mistake in my model or my presentation and said, ‘sorry, Claude did that’ — immediately fired,” he told us. There is no margin for error when you are underwriting 6% unlevered returns with no recourse — meaning the trust threshold for AI is extremely high.
Today on Verticals, Jacob shares how he’s solving that problem for his customers as CEO of Euclid Power through a mix of AI, services, expert approvals, and outcome guarantees — and why that approach earns them significantly more pricing power.
This episode is brought to you by Parafin: the embedded capital infrastructure behind Amazon, DoorDash, Gusto and many more. If you have SMB customers, learn more about how Parafin can grow your retention, revenue, and TAM today — click here or below.
The 1% problem is an operating reality for a growing class of Vertical AI startups building in what we’d call high-consequence workflows: energy project finance, clinical decision support, construction compliance, legal diligence. In these markets, the buyer doesn’t just want speed — they want someone accountable for the answer. And 100% accountability is something an LLM structurally cannot provide today. Training data lags, hallucinations, and inconsistencies can all produce an imperfect output that is effectively worthless — if a customer has to review results for even a small mistake, they might as well handle it in-house.
Euclid Power’s response is what Sandry calls an exoskeleton model. AI handles the extraction, pattern-matching, and first-pass analysis across thousands of documents in a renewable energy data room — interconnection studies, land agreements, environmental reports, engineering specs. But a domain expert reviews every output, validates the conclusions, and stamps the deliverable. The client gets a diligence report they can take to their investment committee, not a chat interface they have to babysit.
Euclid also just acquired Thresh, an AI-native competitor whose pitch was compelling: go from data room to IC memo automatically. The technology was strong. Customers liked it. But the market was limited because when the AI still made mistakes, users had to redo the work anyway. “There’s some sort of cap on the market for it,” Sandry said. “You’d be like, well, this still gets things wrong. So then I still have to go back and do the work.” The AI-only product hit a ceiling not of capability, but of trust.
This has implications well beyond energy. As we explored in Dude, Where’s My Moat?, brand and trust are among the most durable moats at scale for vertical AI companies. In high-stakes verticals, trust isn’t a nice-to-have bolted on after product-market fit — it’s the product itself. The companies that figure out how to deliver AI-powered speed with human-grade accountability have an opportunity to play a different role with customers. Instead of being an OpEx line item, you can earn the economics of advisors and consultants who already charge percentages of deals or longer-term profits. In industries like renewables, half of project spend goes to admin, legal, and compliance — but the big prize still sits in the long-term appreciation of a project. The potential is an order-of-magnitude increase in relative ACV.
The AI services debate often gets framed as a transitional phase — a necessary evil until models get good enough. Jacob’s experience suggests it may be more of a long-term feature than a bug. In verticals where the last 1% is everything, the human in the loop might turn out to be a moat. Watch the full episode to hear why.
Subscribe to Verticals to get new episodes every week, available wherever you watch or listen.