Really loving the posts you put out, Euclid team! A great reminder that these strategies aren't just historical - they're playing out right now across the AI landscape.
I'm curious about two aspects you touched on:
How do you see the "eat the complement" strategy applying specifically to vertical AI startups with limited resources? Is there a capital-efficient way for them to execute this when competing against incumbents with massive war chests?
Your Chegg example highlights a cautionary tale, but what vertical domains do you see as most resistant to general AI commoditization? Are there specific industries where proprietary data combined with domain expertise creates a moat that even AI-native solutions can't easily breach?
Thanks Bocar! Quick thoughts:
(1) Eating complements doesn't need to involve a major infra shift like Salesforce. It can be as simple as this: find a complement, something your customers also buy alongside your product. Could be a discrete service or small point solution. Offer that as a feature of your product so they no longer need to buy it.
(2) We actually think workflow moats tend to be the strongest, including network effects, product breadth and sometimes regulatory connectivity (think Tyler, Roper, Doximity, Autodesk, Shopify). True data moats are actually pretty rare, but tend to be most common in high-concentration markets like financial services, life sciences, ad tech and defense (the credit bureaus, D&B, Black Knight, Plaid, IQVIA, Google, Liveramp, Palantir)... so in some ways it can be hard to disambiguate whether the moat is actually data or more so scale and access to hard-to-get enterprises.