The enterprise technology landscape is currently navigating a structural transformation that rivals the shift from on-premises servers to cloud computing. For nearly two decades, the Software-as-a-Service (SaaS) model — predicated on recurring revenue, seat-based licensing, and user engagement as a proxy for value — has served as the bedrock of the technology economy. It created trillions of dollars in market capitalization by enabling human productivity. Today, LLM-era AI — and the consequent rise of agentic workflows — might be dismantling the economic logic that has underpinned SaaS for decades.
The core driver is a rapid decline in the cost of intelligence—at least at a given level of intelligence (as we’ve written, ballooning reasoning complexity means token spend continues to grow even as the cost of a token shrinks). This trend is unlikely to slow, given fierce competition among well-funded labs (OpenAI, Google, Anthropic, etc.) and continued major advances in hardware and software efficiency.
As with all economic-technological paradigm shifts, however, the ultimate flow of value remains uncertain. If agents replace traditionally manual services, one might think they would capture the spend that would’ve otherwise gone to the manual services. The scenario playing out, however, more closely models the commoditization of those services. In other words, services people would have paid $100k for just a few years ago will eventually fetch only a fraction of that amount, as agents reduce the cost of delivering them.
The result of cheaper AI is persistent deflationary pressure on Vertical AI offerings, which are predominantly attractive due to the consumer surplus enabled by LLMs. Pulling data out of documents? Answering inbound phone calls? Drafting perfunctory compliance reports? Products like these can be excellent wedges today, while infrastructure and know-how are scarce and adoption is low. Soon, they will be table stakes — as excess margin is competed away by several well-funded, credible, fast-growing startups in every category. Any startup that hasn’t developed a moat in the meantime will be a casualty.
Some believe that as the marginal cost of intelligence collapses toward zero, the key value proposition of enterprise technology will shift from providing tools that assist human labor to delivering outcomes that replace it. We believe this is certainly true: Vertical AI can handle more end-to-end workflows than Vertical SaaS alone. This warrants significantly more customer value and willingness to pay, tapping into significantly larger budgets. We disagree, however, with the now-prevalent view that service delivery — a customer relationship that mirrors an external vendor rather than an internal platform integral to the business — is the prevailing paradigm of AI-powered software.
Rising TAMs Don’t Lift All Startups
But isn’t this an unalloyed good, if plummeting intelligence costs ultimately greatly expand the TAM for AI services? In some ways, yes — the flaw is in thinking that this opportunity expansion will necessarily accrue to the same points in the value chain for providing those services. The impact of PCs & spreadsheets on the accounting industry is a great example. This research report from Morgan Stanley describes the impact:
As adoption of this technology grew rapidly throughout the 1980s, especially after the introduction of Microsoft Excel in 1987, we saw a reduction in the number of Americans working as bookkeepers and accounting/auditing clerks (from ~2 million in 1987 to just above 1.5 million by 2000) — but we also saw a significant increase in Americans employed as accountants/auditors (rising from ~1.3 million in 1987 to ~1.5 million by 2000) and management analysts & financial managers (from ~0.6 million in 1987 to ~1.5 million by 2000).
Spreadsheets didn’t just automate bookkeeping — they shifted value up the skill curve, from rote labor to higher-order analysis. Ride-hailing shows an even more dramatic version of this phenomenon: not just reallocation within the value chain, but wholesale elimination of entire intermediary layers. We think AI will do both — and Vertical AI companies need to understand which side of that shift they’re on.
To explore why, let’s broach a concept no tech-ecosystem article on AI is complete without: the Jevons Paradox. It states that, as technology increases the efficiency with which a resource is used (and lowers its cost), the total consumption of that resource increases rather than decreases. Uber is, of course, the canonical startup-land example.
The global taxi market (inclusive of all ride-hailing) grew from roughly $69B in 2019 to $271B in 2024. Pre-Uber, estimates of the global traditional taxi market were in the $30-50B range. So total spending on “getting a car to take you somewhere” has grown roughly 5-8x over 15 years, even as per-ride prices were cut by about half (although in the post-VC-subsidy era, prices have rebounded by 10-20%).
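The expansion multiples above can be sanity-checked with back-of-the-envelope arithmetic — a quick sketch using only the rounded figures cited in this section:

```python
# Back-of-the-envelope check of the market-expansion multiples cited above.
# All values in $B, taken from the rounded figures in this section.

pre_uber_low, pre_uber_high = 30, 50   # pre-Uber global taxi market estimates
market_2024 = 271                      # global taxi + ride-hailing market, 2024

growth_vs_high = market_2024 / pre_uber_high   # against the high pre-Uber estimate
growth_vs_low = market_2024 / pre_uber_low     # against the low pre-Uber estimate

print(f"Expansion vs. pre-Uber baseline: {growth_vs_high:.1f}x to {growth_vs_low:.1f}x")
# With these inputs, that's roughly 5.4x to 9.0x, consistent with the
# "roughly 5-8x" range once post-subsidy price reversion is factored in.
```

The point of the arithmetic is simply that the range holds under either end of the pre-Uber estimates: even the most conservative baseline implies more than a fivefold expansion in total spend.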
As Uber grew the TAM, however, it transformed the flow of value in that market. Historically, revenue from ride-hailing was captured by a handful of stakeholders: owners (both owner-operators and regulatory-monopoly beneficiaries like NYC Medallion owners), brokers (taxi agencies, dispatchers, garages), and taxi drivers employed by such brokers. Almost all of these stakeholders were disrupted and cut out of the value chain, except for drivers, who were able to pivot to a different “employer,” albeit with much higher competition and greater fungibility today. Agency revenue was subsumed by Uber and Lyft. Medallions were crushed: after NYC’s peaked at ~$1m in 2013, they fell to <$100k today (though there are signs of limited recovery, largely thanks to government intervention).
So, yes — technology enabled an ancient, established, low-growth industry to grow by over 500% in a decade. That, in itself, is amazing and an enduring vote of confidence in our collective innovation economy. But in the case of ride-hailing — and in many other instances of market expansion through consumer surplus — growth was accompanied by a major shift in the flow of value, rather than a broad benefit for all existing market participants.
The same Jevons dynamic is now playing out in enterprise AI, and the parallels are familiar. The cost of a quantum of intelligence — holding model quality, context, and reasoning complexity steady — is dropping rapidly. The spend required to deliver GPT-3.5-level inference dropped by more than 280x between November 2022 and October 2024. That frontier-level cognition remains expensive is largely irrelevant — what matters is whether real-world tasks get done or not. And they are getting done. In 2023, using an LLM to read and categorize every single incoming email for a mid-sized company might have been cost-prohibitive — today, at ~$0.40 PMT (per million tokens), it’s a negligible expense. AI coding tools have contributed to a world in which 41% of code is now AI-generated or AI-assisted, further lowering barriers. The cost of producing a unit of “cognitive work” is collapsing at a rate that makes Uber’s per-ride savings look modest.
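To make the cost collapse concrete, here is a short sketch of the implied math. The 280x figure and $0.40 PMT price come from this section; the email volume and tokens-per-email are assumed purely for illustration:

```python
# Illustrative arithmetic for the inference-cost collapse described above.

# Annualized decline implied by a 280x cost drop over ~23 months
# (November 2022 to October 2024):
total_drop = 280
months = 23
annualized = total_drop ** (12 / months)
print(f"Implied annualized cost decline: ~{annualized:.0f}x per year")

# Cost of classifying every inbound email at $0.40 per million tokens,
# for a hypothetical mid-sized company (volumes are assumed):
emails_per_day = 1_000        # assumed inbound volume
tokens_per_email = 500        # assumed prompt + completion tokens
price_per_million = 0.40      # $/1M tokens, per the figure above

daily_cost = emails_per_day * tokens_per_email / 1_000_000 * price_per_million
print(f"Daily cost: ${daily_cost:.2f}  (~${daily_cost * 30:.0f}/month)")
```

Under these assumptions, the 280x drop works out to roughly a 19x decline per year, and reading every email in a company costs pennies a day — which is why workloads that were cost-prohibitive in 2023 are now rounding errors.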
And, as Jevons Paradox would predict, total AI spending is exploding. Enterprise AI revenue went from $1.7B in 2023 to $37B in 2025 — a 22x increase in two years. Global AI spending is projected to exceed $2.5 trillion in 2026. Gartner recently pulled forward by two full years its forecast that AI will account for one-third of all IT spending. But as with taxis, the question is not whether the pie grows. The question is who gets to eat.
The Dispatcher Problem
A popular thesis holds that as the marginal cost of intelligence falls toward zero, the winning business model is AI Services (or, as some have put it, “Service-as-Software”). The idea is that such startups deliver outcomes that replace labor or outsourced services, rather than selling tools that augment them. Foundation Capital frames this as a $4.6T opportunity: whereas IT budgets comprise 1-2% of GDP, labor and traditional services account for more than 15%. The math is seductive. If your AI can do the work of an accountant, a paralegal, or a compliance analyst, shouldn’t you be able to price against the fully-loaded cost of that employee?
In theory, yes. In practice, startups shouldn’t expect to rip, replace, and capture such budgets long-term simply by offering an analog product. This comes back around to our point that services are, by definition, commoditizable (a dynamic we explored extensively in our earlier analysis of layer commoditization cycles). Today, the appeal for replacement is obvious — it’s cheaper and faster. By and large, however, the startups growing through distribution of these AI alternatives do not own the IP that enables this economic arbitrage (the cost differential between an LLM-delivered output and a human-delivered one). The labs that own LLMs do. Without durable moats of their own, an AI Services startup is a reseller of intelligence — and, in our view, basic workflow orchestration, RAG, or domain-specific fine-tuning don’t count as durable moats.
This is the taxi dispatcher problem applied to AI. Before Uber, a taxi dispatch agency captured margin by providing ride-matching; drivers received steady work, and ride-hailers received consistent service with a single point of contact. There was some degree of defensibility in aggregating supply (driver density in a region) and demand (awareness in that region). When a platform emerged that could not only match supply and demand more efficiently but also massively expand supply and lower the cost of expansion by outsourcing car ownership—critically, offering lower costs to riders in the process—the dispatcher’s ability to compete evaporated.
The dispatchers didn’t lose because they couldn’t compete with Uber’s take rate — and this point deserves emphasis. Today, Uber takes ~30% of driver revenues on average, which is not wildly different from the 30-50% that traditional taxi agencies, medallion lessors, and dispatchers collectively extracted from drivers under the old model. Uber’s moat didn’t come from extracting less; it came from consolidating every intermediary function — dispatch, payment, matching, reputation — into a single platform that owned the network. When you are the network, you don’t need to compete on take rate. You are the infrastructure.
Similarly, an AI Service company whose primary value is “we deliver this service cheaper by using LLMs” is a dispatcher sitting on a margin advantage that doesn’t belong to them. It belongs to the cost curve of inference — and that curve is controlled by model labs, hyperscalers, chip manufacturers, and energy producers. When models get cheaper (and they will), or when a competitor plugs into the same model API and undercuts on pricing (and they will), AI Services startups’ cost advantage compresses toward zero. There are ~35k AI wrapper apps globally today, with significantly more competition—thanks to lower entry and software development costs—than in prior technology eras.
In other words, yes, AI will allow companies to deliver services at radically lower cost. That is resulting in impressive top-line growth for many AI Services offerings as businesses make the obvious choice to switch to cheaper alternatives. But the ability to deliver a service cheaply is not the same as the ability to retain the margin from delivering it. The consumer surplus created by collapsing intelligence costs will be enormous. Who captures that surplus durably is the defining question of the current moment in enterprise AI.
Embeddedness & Defensibility
The companies that will capture and retain the surplus from collapsing intelligence costs are those that can build defensibility beyond the cost curve. Historically, in enterprise technology, that defensibility has come from a consistent set of sources, which we discussed at length in our essay, “Dude, Where’s My Moat?” There, we evaluated the relative importance of various moats for Vertical AI, based on the stage of the company:
At the inception stage, most advantage derives from moats that quickly degrade: domain expertise and speed & execution. Partnership and integration relationships are a durable moat, but they become less relevant at scale. The most critical moats at the growth stage are usage and data loops, as we described above. In all likelihood, a defensible Vertical AI business at scale will have, at minimum, moats in data gravity, brand & trust, and/or platform lock-in.
Setting aside the universal advantages that stem from the founding team (expertise and velocity), these moats all share a common origin: they arise from being deeply interconnected with the customer’s business. And this point brings us to the crux of our thesis. The most important axis for evaluating a Vertical AI business is not “service vs. software”; it is internal vs. external.
By “internal,” we don’t necessarily mean a product with a traditional SaaS UI that the customer “logs into” every day. In fact, the long-standing software-industry consensus that value is correlated with direct, hands-on keyboard usage is, in our view, dead.1 What we mean, rather, is this: is the AI company embedded in the customer’s operations in a way that makes it structurally difficult to remove? Does it hold proprietary data that the customer generated? Does it connect the customer to counterparties, suppliers, or ecosystems that would be painful to rewire? Is it integrated into adjacent workflows in such a way that removing it would cause cascading disruptions?
In contrast, “external” solutions mirror traditional services vendors. The customer calls on the AI Services startup when they need a task done — akin to a BPO, albeit faster and cheaper — but may turn to a different one next week, if it offers a better deal. The customer shares what’s needed to get that job done, but likely little else. Ongoing interaction outside the “work to be done” is limited. These solutions deliver real value, and they will grow quickly while cost differentials are large and adoption is nascent. But by sitting on borrowed margin—cost arbitrage enabled by the model layer, not their own IP—they are exposed to the same competitive dynamics as everyone else, including not only other AI Services startups but also deep-pocketed SaaS incumbents and (perhaps eventually) even the buyers themselves.
Internal AI doesn’t just deliver a service, even when it owns end-to-end workflows or guarantees outcomes; it becomes a deeply embedded partner to its customers. AI, in fact, offers a much greater surface area to do so than the SaaS model ever did — a theme we explored in “Emerging Playbooks in Vertical AI,” where we traced the evolution from authoring layers to Systems of Intelligence. Internal AI platforms can embed themselves in every workflow, accumulate proprietary data every step of the way, and use that positioning to build advantages (and switching costs) that compound over time. As the model layer commoditizes further, internal AI will be insulated because the customer isn’t just paying for an inference portal, but for a System of Intelligence & Action that rivals the stickiness of traditional software Systems of Record.
To make this framework concrete, we can map the Vertical AI landscape along two axes: internal vs. external (how deeply embedded the product is in a customer’s operations) and wedge vs. platform (the breadth and depth of the product offering today). This produces four quadrants, each with distinct risk profiles and trajectories:
In the upper-right quadrant — Durable — sit Internal AI platforms: Systems of Intelligence & Action with a clear path to compounding moats, usually having evolved from an initial wedge into a multi-product platform deeply embedded in customer workflows. Companies like Abridge and EvenUp exemplify this trajectory. The upper-left — Rare — captures external-facing platforms: often consultative, high-ACV plays that may be dog-fooding an internal AI product. These can work, but high customer concentration and limited embeddedness make them unstable. The lower-left — Commodity Risk — is the danger zone: external wedge products with extreme early growth potential but existential risk from competing on borrowed AI margin. The lower-right — Precarious — represents internal wedges with high early growth potential that can extend into defensible platforms, but face meaningful risk from AI-forward incumbents who may replicate the wedge.
Importantly, the blue arrows on the chart illustrate the two valuable transition paths: from external to internal (deepening embeddedness) and from wedge to platform (building product breadth). The wedge-to-platform transition is a time-tested model for building durable Vertical Software businesses; newer is the approach of startups attempting to make both jumps, starting with an external, highly scalable AI Services wedge.
Our overall point is not that AI Services built on LLM infrastructure are inherently a bad initial wedge (or business model). Rather, we believe they must either forge a path to defensibility—most likely through some form of customer internalization—or face eventual commoditization.
Vertical AI: A Sanctuary from Commoditization
This framing of what wins in AI reminds us that success in the current era — while inherently disruptive — must pursue many of the same goals as traditional SaaS companies in terms of customer relationships. It is also a key reason we believe Vertical AI is so powerful. The unique dynamics of every industry offer fertile ground for building differentiated solutions that become deeply internalized by customers — as we argued in “The Future of AI is Vertical,” where we first laid out the thesis that vertical markets would be the near-term winners as LLMs’ performance increases and costs decrease.
The best vertical SaaS companies — Veeva, Procore, Toast, ServiceTitan, etc. — didn’t win because they were cheaper than the alternative. They won because they became the system of record that more closely mirrored users’ very particular needs. With millions in consulting spend, an enterprise might shape a Salesforce or NetSuite to suit its needs, but why do that when there is a system built for you? Along the way, Vertical SaaS platforms captured proprietary industry or first-party data (clinical trial data, job costing data, restaurant sales data) that made the product better the longer you used it. They connected fragmented, vertically unique ecosystems (pharma and clinical sites, GCs and subcontractors, restaurants and delivery networks) in ways that created network-effect moats. As we discussed in our two-part series on market sizing in Vertical AI, these platforms often start with narrow beachheads before expanding into much larger TAMs through product layering — a dynamic that AI is accelerating.
The Vertical AI startups that will durably capture the surplus from collapsing intelligence costs — whether they consider themselves AI Services or not — will follow the same playbook. The wedge may be a service delivered more cheaply. But the moat will be a system built on top of that wedge, one that leverages internal positioning with customers to develop the proprietary data, the network effects, the multi-product platform, and the industry “brain” that make the product infrastructure to be relied upon, rather than just another vendor.
Those that never make that leap — that remain external providers of AI-delivered services, competing on cost — will face the same fate as taxi dispatchers. They’ll watch their market grow 500% while margins compress toward zero.
Winners Will Embrace Commoditization
In a Vertical Collective Roundtable late last year, a founder shared an insight around AI commoditization that stuck with us:
[Many] think that race to the bottom is a bad thing… we think it’s the opposite… the real unlock is new value creation.
This may seem contradictory — we’ve argued that cost competition alone is fatal. The distinction is intent. Racing to the bottom on price is deadly if that’s all you do. It’s powerful if it’s a deliberate wedge to win positioning from which to build the moats described above.
Some Vertical AI startups should embrace and accelerate the race to the bottom in pricing that will come with AI Services commoditization. They have the opportunity to attract many customers by offering shockingly low prices that traditional players cannot match. Yes, they cannibalize the per-customer “outcome” revenue opportunity. But they also win fast growth, industry trust, and the right to serve that customer in other ways.
Moreover, by weakening the position of market leaders that can’t compete, the Vertical AI startup can create a competitive vacuum, establishing a pole position from which to expand. Serving up that surplus on a silver platter can be a trust-building hack like no other. We covered variations of this strategy—which we call “Nuking Pricing Power”—in our prior essay on product commoditization, “A Guide to Disrupting Incumbents”:
Develop or support a lower-priced (or free) version of your complement, incentivizing fast adoption and decreasing pricing power of your complement.
A simpler way to put it is: if your product is going to be commoditized, you might as well do it yourself to win the market.
The Value Hypothesis
Every paradigm shift in enterprise technology produces a land grab — and, inevitably, a shakeout. Cloud computing launched thousands of SaaS startups between 2005 and 2015; most were absorbed, acqui-hired, or zeroed out, while a small cohort graduated into durable, category-defining platforms. We expect the same pattern in Vertical AI, but with greater ultimate market opportunity, faster potential growth, creative new monetization models, greater early capital efficiency, and — for all those reasons — unprecedented levels of competition.
The wedge that enables much of the current generation of app-layer startups is cheap intelligence. The trap for AI Services founders is mistaking a scalable wedge for a defensible business. The companies that will endure are those that use the current window (while cost differentials are large, adoption is nascent, and incumbents are slow) to embed themselves so deeply in their customers’ operations that switching becomes structurally painful, not just inconvenient. As we wrote in “Early-Stage VC in the Age of Vertical AI,” the profiles of Vertical AI company-building are forcing investors and founders alike to reimagine what success looks like. This is no exception.
This is not a new idea. It is, in fact, the oldest idea in enterprise software, rediscovered. What’s new is the surface area: SaaS companies could embed in a few workflows and capture data from the screens users interact with. AI-native platforms can embed in every workflow, capture data from every interaction — whether a human is present or not — and build compounding intelligence that makes the product better the longer it runs. The opportunity to build “load-bearing infrastructure” has never been greater. Neither has the temptation to settle for being a “cheaper vendor.”
As Benchmark Capital and Wealthfront co-founder Andy Rachleff argued, the “value hypothesis” of a startup — the what, the who, and the how of demand — is “seldom correct” on the first attempt, because founders must discover who is truly desperate for their product… not just who says they’re interested. This is also why we’ve argued that when megafunds suggest that market winners are obvious within two years of founding, these are logical, if not self-serving, positions for their fund models, but not how markets behave or how category-winners emerge.
Customers are always interested in cheaper services, and AI can help deliver them. What customers truly want — and what they’ll pay to retain — is a system that knows their business better than they do: one that compounds institutional knowledge, connects them to their ecosystem, and becomes more internal, and more valuable, with every interaction. Building that system is harder than reselling cheap inference. But it’s the only thing worth building.
Thanks for reading Euclid Insights! Euclid is a VC partnering with Vertical AI founders at inception. If anyone in your network is working on a new startup in this space, we’d love to help. Just drop us a line via DM or in the comments below.
To expand on this point: if AI can make human workers more efficient, the seat-based SaaS model popularized by Salesforce no longer makes sense; the better your product is, the less your customer would spend (at least in the short term). “Screen time,” moreover, is irrelevant if the objective of an autonomous agent is to execute tasks—drafting contracts, resolving customer support tickets, reconciling financial ledgers—without substantial human intervention.
In this new paradigm, efficiency may come to be defined by the absence of screen time. Finally, UI itself is becoming fungible. As we discussed in a recent episode of our podcast, Verticals, with Euclid portfolio founder Mike Powers, while the “decision layer” of data, actions, and records is as important as ever, we’re entering a world where no customer has the same UI. Two ways we see this playing out: interfaces auto-generated by a platform, unique to each user (“inception software,” one of our 2026 predictions); and “Bring-Your-Own-UI” (BYOUI), whether agentic via MCP or through an LLM-spun custom app.