Behind the $1 Trillion Evaporation: The Moats of Vertical Software Are Being Rewritten by Large Models


LLMs are systematically dismantling the moats that vertical software relied on for survival. The days of earning high premiums based on "difficult-to-use software" and "complex processes" are over. The market is undergoing a brutal value revaluation.

Author | Kozmon

Editor | Hard AI

Fintool founder Nicolas Bustamante recently posted a devastating deep-dive on X, pointing directly at the cruel truth behind the evaporation of nearly $1 trillion in market capitalization from software stocks.

As a "dual-track" entrepreneur who once built Europe's largest legal tech platform (Doctrine) and is now dedicated to AI finance (Fintool), he stands at the junction of the old and new eras, detailing how the ten major moats that the vertical SaaS industry relies on for survival are being dismantled one by one by large models.

Nicolas believes that LLMs (Large Language Models) are systematically dismantling the moats that vertical software relied on for survival. The days of earning high premiums based on "difficult-to-use software" and "complex processes" are over, and the market is undergoing a brutal value revaluation.

We have summarized the key points of this article for you:

1. "Difficult to use" is no longer a moat

Previously, the Bloomberg Terminal's strongest moat was precisely that it was "difficult to use." Users spent a long time learning complex keyboard shortcuts and function codes, and once learned, they didn't want to switch. Now, LLMs have collapsed all of these complex interfaces into a single chat box. Users only need to speak plain language to retrieve data and run models. The "proficiency barrier" that Wall Street paid for has vanished overnight.

2. Business logic has changed from "millions of lines of code" to "one Markdown file"

This is the most disruptive point. Previously, replicating a legal or financial workflow required engineers who understood the industry to write code for years; now, a knowledgeable fund manager only needs to write a Markdown document (a prompt skill) telling the AI how to perform a DCF valuation, and the AI can execute it well. The core barrier has shifted from "scarce engineers" to "cheap documents," and the time for competitors to copy you has shortened from years to weeks.

3. Companies making money by organizing public data are in danger

Previously, organizing messy financial reports (SEC filings) into searchable data was valuable. Now, frontier models (like Claude or GPT) can natively read 10-K annual reports and legal documents; the model itself is the best parser. The business of profiting from "information asymmetry" and "organizing public data" is being ruthlessly commoditized by AI.

At the same time, "scarce talent who understands both code and business" is no longer the bottleneck. Fund managers who understand the business don't need to learn Python; they can direct the AI in plain language. Once-scarce industry experience can now be converted into software products rapidly, and competitors will multiply.

4. Exclusive data is now the only real trump card

Conversely, if you possess "private data" that cannot be scraped or synthesized (such as Bloomberg's real-time trading desk data or S&P's private credit ratings), LLMs will actually multiply your value. Because in the AI era, this data is the "scarce fuel" that all agents crave, and companies with exclusive data will hold absolute pricing power.

5. "Compliance" and "transactions" are nuts AI can't crack

Don't panic just yet: some moats AI can do nothing about. Medical software vendor Epic, for example, is protected by HIPAA compliance and FDA certification; Stripe is protected by licenses and the rails for moving money. No matter how smart AI gets, it cannot clear the regulatory hurdle, nor can it move funds without banking rails. As long as your software is "embedded in transactions" or "built on regulation," you are safe for now.

6. The competitive landscape shifts from a three-player oligopoly to a hundred-player melee

Previously, building a vertical SaaS required 200 engineers and a $50 million data budget, so each industry was usually monopolized by only 2-3 giants. Now, with APIs and a handful of engineers, you can replicate 80% of the giants' functionality, and the number of competitors jumps from 3 to 300. The pricing structure collapses, and the lofty valuation multiples of SaaS companies are brought back down to earth.

7. The real threat is a "pincer attack"

Vertical SaaS is now attacked from both sides: from below, hundreds of AI-native startups are tearing off pieces; from above, giants like Microsoft and Anthropic are entering vertical fields directly through "general agent + plugins." Software is going "headless." In the future, users may never open your software at all, instead calling your services through AI agents. If you devolve into a mere "data supplier," your profits will be squeezed dry by the platform.

The following is part of the original article:

Deep Dive into Vertical Software for Ten Years: My View on This Sell-Off

In the past few weeks, the market value of software and service stocks has evaporated by nearly $1 trillion. FactSet's market cap fell from a peak of $20 billion to less than $8 billion. S&P Global fell 30% in a few weeks. Thomson Reuters' market cap has shrunk by nearly half in a year. The S&P 500 Software & Services Index, composed of 140 companies, is down 20% year-to-date.

Last week, Anthropic released industry-specific plugins for Claude Cowork. Claude Cowork is an AI Agent designed specifically for knowledge workers, capable of autonomously handling complex research, analysis, and document workflows.

Wall Street calls it panic. I have spent the past decade building vertical SaaS: first @Doctrine, today Europe's largest legal information platform (competing with LexisNexis, Westlaw, and others); then @fintool, an AI-driven equity research platform competing with Bloomberg, FactSet, and S&P Global in the US.

"I have built the software that is now threatened by Large Language Models (LLMs). And what I am building now is precisely the software that poses that threat. I am at both ends of this disruptive change."

Here is the truth I see: LLMs are systematically dismantling the moats that make vertical software defensible. But not all of them. The result is that the value composition of vertical software and the valuation multiples it deserves are being redefined.

In this article:

The ten moats that make vertical software defensible, and the impact of LLMs on each one

Why the market sell-off is structurally rational but wrong on timing

What the real threat actually is (not what you think)

What will replace vertical software

Where the vertical software industry goes next

01

Vertical Software's Ten Moats (and LLM's Impact on Each)

Vertical software is software built for specific industries. For example, Bloomberg in finance, LexisNexis in law, Epic in healthcare, Procore in construction, and Veeva in life sciences.

These companies share a defining characteristic: high fees and extremely low customer churn. FactSet charges over $15,000 per user per year. A Bloomberg Terminal seat costs $25,000. LexisNexis charges law firms thousands of dollars per month. Their customer retention rates hover around 95%.

I believe there are ten distinct moats. LLMs are attacking some of them while preserving others. Understanding which are attacked and which are preserved is the whole game.


1. Learned Interfaces → Destroyed

A Bloomberg Terminal user spends years learning keyboard shortcuts, function codes, and navigation patterns. GP, FLDS, GIP, FA, BQ. These are not intuitive operations; they are a language. Once you are fluent in it, switching to another platform means becoming illiterate again.

I have heard this countless times: "We are a FactSet shop." "We are a Lexis firm." "We are a Bloomberg house." These statements are not about data quality or feature sets. They are declarations of software muscle memory. People spent a decade learning this tool. That investment is non-transferable.

This is the most underrated moat. Knowledge workers pay to avoid relearning workflows they have mastered for ten years. The interface itself is a large part of the value proposition.

I experienced this firsthand at Doctrine. We had a team of designers and a large Customer Success Manager (CSM) team whose entire job was to guide lawyers to use our interface. Every UI change was a project: user research, design sprints, cautious releases, hand-holding. We would spend weeks redesigning a faceted search filter because lawyers had built muscle memory around the old filter. The interface wasn't a feature; it was the product. Maintaining it was one of our largest cost centers.

At Fintool, we have no onboarding. No CSMs teaching people how to navigate the product. Our users type what they want in simple English and get answers. There is no interface to learn because everything is a conversation. The entire cost center—designers, CSMs, UI change management—simply doesn't exist. The chat interface absorbed all these support structures.

"LLMs collapse all proprietary interfaces into a chat window."


Consider what a financial analyst does on a Bloomberg Terminal today. They navigate to the stock screening function. Use specialized syntax to set parameters. Export results. Switch to the DCF (Discounted Cash Flow) model builder. Input assumptions. Run sensitivity analysis. Export to Excel. Create a presentation.

Every step requires learned interface knowledge. Every step reinforces switching costs.

Now consider what the same analyst does with an LLM agent:

"Show me all software companies with a market cap over $1 billion, a P/E ratio below 30, and revenue growth exceeding 20% year-over-year. Build DCF models for the top 5. Run sensitivity analysis on the discount rate and terminal growth rate."

Three sentences. No keyboard shortcuts. No function codes. No navigation. The user doesn't even know which data provider the LLM queried. They don't care.

When the interface becomes a natural language conversation, years of muscle memory become worthless. The switching cost that justified a $25,000 annual seat fee dissolves instantly. For many vertical software companies, the interface was the majority of the value. The underlying data was often licensed, public, or semi-commoditized. What supported the premium pricing was the workflow built on top of the data. That is over.
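To make the contrast concrete, here is a minimal sketch of the structured screen an agent might compile that request into. All company data, field names, and figures below are invented for illustration:

```python
# Hypothetical sketch: the structured filter an agent might compile the
# natural-language screen into. Companies and numbers are invented.
companies = [
    {"name": "Alpha", "market_cap_b": 4.2, "pe": 22, "rev_growth": 0.31},
    {"name": "Beta",  "market_cap_b": 0.8, "pe": 18, "rev_growth": 0.25},
    {"name": "Gamma", "market_cap_b": 9.5, "pe": 41, "rev_growth": 0.22},
    {"name": "Delta", "market_cap_b": 2.1, "pe": 27, "rev_growth": 0.12},
]

def screen(rows):
    """Market cap > $1B, P/E < 30, YoY revenue growth > 20%."""
    return [r["name"] for r in rows
            if r["market_cap_b"] > 1 and r["pe"] < 30 and r["rev_growth"] > 0.20]

print(screen(companies))  # ['Alpha']
```

The point is not the filter itself, which is trivial, but that the user never writes it: the sentence is the interface, and the agent decides how and where to execute it.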

2. Custom Workflows and Business Logic → Evaporated

Vertical software encodes how an industry actually works. A legal research platform doesn't just store case law. It encodes citation networks, Shepard's signals, headnote taxonomies, and the specific way litigators write briefs.

Building this business logic takes years. It reflects thousands of conversations with domain experts. When I built Doctrine, the hardest part wasn't technology. It was understanding how lawyers actually work: how they research case law, how they draft documents, how they build litigation strategies from filing to trial. Encoding that understanding into working software is what makes vertical software valuable—and defensible.

LLMs turn all of this into a Markdown file.

This is the most underrated shift, and I think the most destructive in the long run.

Traditional vertical software encodes business logic in code. Thousands of if/then branches, validation rules, compliance checks, approval workflows. Hard-coded by engineers over years... and not just ordinary engineers. You need software engineers who truly understand the field, which is rare. Finding someone who can write production-grade code and also understand how litigation workflows work, or how DCF models should be constructed, is extremely difficult. Modifying this business logic requires development cycles, QA (Quality Assurance), and deployment.

Let me give you a concrete example from my own experience.

At Doctrine, we built a legal research workflow that helped lawyers find relevant case law for a given legal issue. The system needed to understand the legal domain (civil vs. criminal vs. administrative), parse the question into searchable concepts, query multiple court databases, rank results by relevance and authority, and present the correct citation context. Building it took a team of engineers and legal experts years of work. The business logic was scattered across thousands of lines of Python code, custom ranking algorithms, and manually tuned relevance models. Every modification required engineering sprints, code reviews, testing, and deployment.

At Fintool, we have a DCF valuation Skill. It tells the LLM agent how to perform a discounted cash flow analysis: what data to gather, how to calculate WACC (Weighted Average Cost of Capital) by industry, what assumptions to validate, how to run sensitivity analysis, when to add back stock-based compensation. It is a Markdown file. Writing it took a week. Updating it takes minutes. A portfolio manager who has done 500 DCF valuations can encode their entire methodology without writing a single line of code.
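For a sense of what such a skill encodes, here is a minimal, illustrative DCF with a Gordon-growth terminal value and the sensitivity sweep the text mentions. The cash flows, WACC values, and growth rates are invented; this is a sketch of the method, not Fintool's actual skill:

```python
def dcf_value(fcf, wacc, terminal_growth):
    """Discount projected free cash flows, then add a Gordon-growth
    terminal value discounted back from the final forecast year."""
    pv_fcf = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
    terminal = fcf[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(fcf)
    return pv_fcf + pv_terminal

# Five years of projected FCF ($M), then a sensitivity grid over the two
# assumptions the skill is said to vary: discount rate and terminal growth.
fcf = [100, 110, 121, 133, 146]
for wacc in (0.08, 0.09, 0.10):
    for g in (0.02, 0.025, 0.03):
        print(f"WACC {wacc:.1%}, g {g:.1%}: ${dcf_value(fcf, wacc, g):,.0f}M")
```

A skill file describes this procedure in prose (which inputs to gather, which adjustments to validate); the model supplies the execution, which is why updating the methodology means editing text rather than redeploying code.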


"Years of engineering versus a week of writing. That is the shift."

And it's not just speed. The Markdown skill performs better in important ways. Anyone can read it. It is auditable. It can be customized for every user (our customers write their own skills). It automatically gets better as the underlying model improves, without us touching a line of code.

Business logic is migrating from code written by specialized engineers to Markdown files that anyone with domain expertise can write. The accumulated business logic that vertical software companies spent a decade building can now be replicated in weeks. The workflow moat is eroding extremely fast.

3. Public Data Access → Commoditized

A large part of the vertical software value proposition is making hard-to-access data easy to query. FactSet makes SEC (U.S. Securities and Exchange Commission) filings searchable. LexisNexis makes case law searchable. These are genuine services. SEC filings are technically public, but try reading a raw 200-page 10-K report in HTML format. The structure is inconsistent across companies. Accounting terminology is obscure. Extracting the actual numbers you need requires parsing nested tables, tracking footnote references, reconciling restated data.

Before LLMs, accessing this public data required specialized software and massive engineering support structures. Companies like FactSet built thousands of parsers for every filing type, every company's unique format. As formats changed, swarms of engineers maintained these parsers. The code that turned raw SEC filings into queryable data was a real competitive advantage.

At Doctrine, this was also massive work. We built NLP (Natural Language Processing) pipelines for different bodies of case law: Named Entity Recognition (NER) to extract judges, courts, and legal concepts; specialized machine learning models to classify rulings by legal domain; custom parsers for each court, each with its own formatting quirks. We had engineers who spent years building and maintaining this infrastructure. It was genuinely impressive technology and a real moat, because replicating it meant years of work.

At Fintool, we built none of this. Zero NER. Zero custom parsers. Zero industry-specific classifiers. Why? Because frontier models already know how to navigate 10-K reports. They know Home Depot's ticker is HD. They understand the difference between GAAP (Generally Accepted Accounting Principles) and non-GAAP revenue. They can parse nested tables in segment disclosures without being taught the pattern. The parsing infrastructure that Doctrine spent years building is now a commodity capability included for free with the model.

LLMs make this trivial. Frontier models already know how to parse SEC filings from their training data. They understand the structure of a 10-K, where to find revenue recognition policies, how to reconcile GAAP and non-GAAP data. You don't need to build a parser. The model is the parser. Feed it a 10-K, and it can answer any question about it. Feed it the entire corpus of federal case law, and it can find relevant precedents.

The parsing, structuring, and querying that vertical software spent decades building is now a commodity capability built into the foundation models themselves. The data isn't worthless. But the "make it searchable" layer—that was where a lot of value and pricing power sat—is collapsing.

4. Talent Scarcity → Inverted

Building vertical software requires people who understand both the domain and technology. Finding an engineer who can write production-grade code and also understand how credit derivative structures work is extremely rare. This scarcity created a natural barrier to entry, historically limiting the number of capable competitors in any vertical.

LLMs completely flipped this moat.

At Doctrine, recruiting was brutal. We didn't just need good engineers. We needed engineers who could understand legal reasoning: how precedent works, how jurisdictions interact, what an appeal to the Supreme Court looks like. These people barely existed. So we trained them ourselves. Every week, we held internal lectures where lawyers taught engineers how the legal system actually works. A new engineer took months to become productive. Talent scarcity was a real barrier, not just for us, but for anyone trying to compete with us.

At Fintool, we do none of this. Our domain experts (portfolio managers, analysts) write their methodologies directly into Markdown skill files. They don't need to learn Python. They don't need to understand APIs. They write in plain English what a good DCF analysis looks like, and the LLM executes it. The engineering is handled by the model.

LLMs have made engineering accessible, which means the scarce resource (domain expertise) has suddenly become abundant in its ability to convert into software. This is why barriers to entry are collapsing so dramatically.

5. Bundling → Weakened

Vertical software companies expand by bundling adjacent capabilities. Bloomberg started with market data, then added messaging, news, analytics, trading, and compliance. Each new module added switching costs because customers now relied on the entire ecosystem, not just one product. S&P Global's $44 billion acquisition of IHS Markit was exactly this strategy. The bundle became the moat.

At Doctrine, bundling was the growth strategy. We started with case law search, then added legislation, legal news, alerts, then document analysis. Each module had its own UI, its own onboarding, its own customer workflow. We built elaborate dashboards where lawyers could configure watch lists, set automatic alerts for specific legal topics, and manage their research folders. Every feature meant more design work, more engineering, more UI surface area. The bundle locked customers in because they had built their entire workflow around our ecosystem.

LLM agents break the bundling moat because the agent itself is the bundle. At Fintool, alerts are a prompt. Watch lists are a prompt. Portfolio screening is a prompt. There are no separate modules for each function. No UI to maintain. The customer says "Alert me when any company in my portfolio mentions tariff risk in an earnings call," and it works. The agent orchestrates ten different specialized tools in a single workflow. It can pull market data from one source, news from another, run analysis through a third, and compile the results. The user never knows or cares that five different services were queried.
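The orchestration pattern described here can be sketched as a dispatcher over independent tool functions. Every name, data source, and return shape below is a hypothetical stand-in, not Fintool's actual architecture:

```python
# Schematic sketch of "the agent is the bundle": one request fans out to
# several independent tools and comes back as a single compiled answer.
def market_data(tickers):
    # Stand-in for a quote feed from provider A.
    return {t: {"price": 100.0} for t in tickers}

def news(tickers):
    # Stand-in for a news/transcript feed from provider B.
    return {t: ["earnings call scheduled"] for t in tickers}

def check_alert(snippets, keyword):
    """The 'alert' is just a filter over whatever the feeds returned."""
    return [s for s in snippets if keyword in s]

def handle_request(portfolio, keyword):
    """One prompt replaces separate alert, watch-list, and screening
    modules: the agent queries each tool and compiles one result."""
    quotes = market_data(portfolio)
    headlines = news(portfolio)
    return {t: {"quote": quotes[t],
                "alerts": check_alert(headlines[t], keyword)}
            for t in portfolio}

result = handle_request(["HD", "MSFT"], keyword="earnings")
```

The user sees one answer; whether it came from one provider or five is an implementation detail the agent is free to change, which is exactly what undermines the bundle.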


When the integration layer shifts from the software vendor to the AI agent, the incentive to buy a bundle evaporates. Why pay a Bloomberg premium for the entire suite when the agent can pick the best (or cheapest) provider for each capability?

This doesn't mean bundling dies overnight. The operational complexity of managing ten vendor relationships versus one is real. But the directional pressure is clear: agents make "unbundling" feasible in ways that were previously impossible.

6. Private and Proprietary Data → Stronger

Some vertical software companies own or license data that doesn't exist anywhere else. Bloomberg collects real-time pricing data from trading desks globally. S&P Global owns credit ratings and proprietary analytics. Dun & Bradstreet maintains business credit files on over 500 million entities. This data was collected over decades, often through exclusive relationships. You can't scrape it. You can't recreate it.

If your data is truly unreplicable, LLMs make it more valuable, not less.

Bloomberg's real-time pricing data from trading desks? Can't be scraped. Can't be synthesized. Can't be licensed from third parties. In an LLM world, this data becomes the scarce input that every agent needs. Bloomberg's pricing power on proprietary data may actually increase.

The same applies to S&P Global's credit ratings. A credit rating isn't just data. It's an opinion backed by regulated methodologies and decades of default data. An LLM cannot issue a credit rating. S&P can.

The test is simple: Can this data be accessed, licensed, or synthesized by others? If not, the moat remains. If yes, you are in trouble.

I have seen this evolution at both companies. When we founded Doctrine, the core value was organizing public case law with industry-specific layers on top: taxonomies, citation networks, relevance ranking. But the team realized early on that public data alone was not enough.

About five years ago, Doctrine started building an exclusive content library: proprietary legal annotations, editorial analysis, curated commentary that exists nowhere else. Today, that library is genuinely hard to replicate, and it has become a real moat. Coupled with a full pivot to LLMs, Doctrine is now thriving.

The companies that survive this transition are those moving from "we organize public data better" to "we own data you can't get elsewhere."

The change is this: the intelligence layer used to require years of engineering; now it's a capability that ships with the model. Even data access itself is being commoditized.

MCP (Model Context Protocol) is turning every data provider into a plugin. Dozens of companies already offer financial data as MCP servers that any AI agent can query. When your data is available as a Claude plugin, the premium for "making it accessible" disappears.
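Schematically, "data provider as plugin" boils down to publishing a tool name, an input schema, and a handler that any agent can discover and call. The following is an illustrative toy registry only; it is not the real MCP SDK, and the tool name and fields are invented:

```python
# Illustrative only: the general shape of a data provider exposed as an
# agent-callable tool (name + input schema + handler). Not the MCP API.
TOOLS = {}

def register_tool(name, schema):
    """Decorator that publishes a handler under a discoverable name."""
    def wrap(fn):
        TOOLS[name] = {"schema": schema, "handler": fn}
        return fn
    return wrap

@register_tool("get_filing", schema={"ticker": "string", "form": "string"})
def get_filing(ticker, form):
    # A real server would fetch from its proprietary store; stubbed here.
    return {"ticker": ticker, "form": form, "text": "..."}

# Any agent that speaks the protocol can enumerate TOOLS and call them:
call = TOOLS["get_filing"]["handler"]
print(call("HD", "10-K")["form"])  # 10-K
```

Once access is reduced to this shape, the "making it accessible" premium sits in the protocol, not in the provider's interface, which is the commoditization the article describes.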

The irony is that LLMs accelerate this bifurcation. Companies with proprietary data win bigger. Companies without it lose everything.

If your data isn't truly unique—meaning it can be scraped, licensed, or synthesized elsewhere—you are not safe. You are at risk of commoditization. The AI agent will own the relationship with the customer. It is the interface the user interacts with, the brand they trust, the product they pay for. You become the agent's supplier, not the customer's supplier.

This is Aggregation Theory playing out in real-time: the aggregator (the agent) captures the user relationship and the profit, while the supplier (the data provider) competes on price to feed the platform. If Bloomberg, FactSet, and a dozen smaller providers all offer similar market data, the agent will route to the cheapest one. Your pricing power evaporates. Your margins compress. You become a commoditized input to someone else's product.

7. Regulatory and Compliance Lock-in → Structurally Sound

In healthcare, Epic's dominance isn't just about product quality. It's about HIPAA (Health Insurance Portability and Accountability Act) compliance, FDA (Food and Drug Administration) certification, and the 18-month implementation cycle hospitals must endure. Switching EHR (Electronic Health Record) vendors is a multi-year, multi-million dollar project that literally jeopardizes patient safety. In financial services, compliance requirements create similar lock-in. Audit trails, regulatory reporting, data retention policies. All of this is baked into the software.

HIPAA doesn't care about LLMs. FDA certification doesn't get easier because GPT-5 exists. SOX (Sarbanes-Oxley Act) compliance requirements don't change because Anthropic released a new plugin.

Epic's dominance in healthcare EHR is fundamentally a regulatory moat. The 18-month implementation cycle, compliance certifications, integration with hospital billing systems. None of this is impacted by LLMs.

In fact, regulatory requirements may slow LLM adoption precisely in the verticals where compliance lock-in is strongest. Hospitals can't replace Epic with an LLM agent because the LLM agent isn't HIPAA certified, doesn't have the required audit trails, and isn't FDA validated for clinical decision support.

8. Network Effects → Sticky

Some vertical software becomes more valuable as more industry participants use it. Bloomberg's messaging function (IB chat) is Wall Street's de facto communication layer. If every counterparty uses Bloomberg, you must use Bloomberg. Not because of the data. Because of the network.

LLMs won't break network effects. If anything, they might make communication networks more valuable. The information flowing through these networks becomes training data, context, signal.

This also applies to any vertical software acting as a communication layer within an industry. Veeva's network effects among pharmaceutical companies. Procore's network effects among construction stakeholders. These are sticky because value comes from who else is on the platform, not from the interface.

9. Transaction Embeddedness → Durable

Some vertical software sits directly in the flow of funds. Payment processing for restaurants. Loan origination for banks. Claims processing for insurance companies. When you are embedded in the transaction, switching means disrupting revenue. No one does that voluntarily.

If your software processes payments, originates loans, or settles transactions, LLMs won't disintermediate you. They might sit on top of you as a better interface, but the rails themselves remain critical.

Stripe is not threatened by LLMs. Neither are FIS or Fiserv. The transaction processing layer is infrastructure, not interface.

10. System of Record Status → Long-term Threatened

When your software is the canonical source of truth for business-critical data, switching isn't just inconvenient. It's existential risk. What if data is corrupted during migration? What if history is lost? What if audit trails break?

Epic is the system of record for patient data. Salesforce is the system of record for customer relationships. These companies benefit from the asymmetry between the cost of staying (high fees) and the cost of leaving (potential data loss, operational disruption).

LLMs don't directly threaten system of record status today. But agents are quietly building their own systems of record.

What is happening is this: AI agents don't just query existing systems. They read your SharePoint, your Outlook, your Slack. They collect data about users. They write detailed memory files that persist between sessions. When they perform critical actions, they store that context. Over time, the agent accumulates a richer, more complete map of a user's work than any single system of record.

The agent's memory becomes the new source of truth. Not because anyone planned it, but because the agent is the layer that sees everything. Salesforce sees your CRM data. Outlook sees your email. SharePoint sees your documents. The agent sees all three and remembers.

This won't happen overnight. But directionally, agents are building their own systems of record from scratch. As agent context memory grows, traditional system of record moats weaken.

02

The Net Result: Lower Barriers to Entry

Add it all up: five moats destroyed or weakened, five still solid. But the broken moats are precisely the ones that kept competitors out, and the solid ones are possessed by only a handful of incumbent giants.

Before LLMs, building a credible competitor to Bloomberg or LexisNexis required hundreds of engineers who understood the domain, years of development time, massive data licensing deals, sales teams capable of selling to conservative enterprises, and regulatory certifications. Result: most verticals had only 2-3 real competitors.

After LLMs, a small team with frontier model APIs, domain expertise, and a good data pipeline can build a product covering 80% of a vertical software suite's functionality in a few months. I know because I've done it. Fintool was built by a six-person team. We serve hedge funds that previously relied entirely on Bloomberg and FactSet. Not because we have better data, but because our AI agent delivers answers faster and more intuitively than terminals that take years of training to master.

The critical insight is that competition doesn't increase linearly; it explodes combinatorially. You don't go from 3 incumbents to 4. You go from 3 to 300. This is what destroys pricing power. Before LLMs, each vertical had 2-3 dominant players because the barriers to entry were insurmountable, which gave them premium pricing power. When 50 AI-native startups can offer 80% of the capability at 20% of the price, that math changes completely.

03

Key: This is a Multi-Year Transition, Not an Overnight Collapse

This is where I think the market is wrong on timing, even if directionally correct.

"Enterprise revenue won't disappear overnight."

FactSet customers are on multi-year contracts. Bloomberg Terminal contracts typically run at least two years. These contracts don't evaporate just because Anthropic released a plugin.

Enterprise procurement cycles are measured in quarters and years, not days. A $50 billion hedge fund won't rip out S&P Capital IQ tomorrow just because Claude can query SEC filings. They will evaluate alternatives over 12-18 months. They will run pilot programs. They will negotiate contract terms. They will wait for existing contracts to expire.

"The revenue cliff is real, but it's a slope, not a cliff. Current revenue is largely locked in for the next 12-24 months."

But here is what the market has understood: you don't need revenue to drop for stock prices to crash. You only need valuation multiples to compress. A financial data company might trade at 15x revenue while it has pricing power and 95% retention, and at 6x once the market believes both are eroding. Revenue stays flat. The stock drops 60%. That is exactly what is happening to some companies right now.
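The arithmetic is easy to verify: hold revenue flat and compress the multiple from 15x to 6x, and the repricing alone is a 60% drawdown.

```python
revenue = 1.0                     # flat revenue, any units
cap_before = 15 * revenue         # priced at 15x revenue
cap_after = 6 * revenue           # repriced at 6x revenue
drop = 1 - cap_after / cap_before
print(f"{drop:.0%}")              # 60%
```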

The market isn't pricing in a revenue collapse. It is pricing in the end of premium multiples because the moats justifying those multiples are dissolving.

04

The Real Threat

The real threat isn't the LLM itself. It is a pincer movement that vertical software incumbents didn't anticipate.

From below, hundreds of AI-native startups are entering every vertical. When building a credible financial data product required 200 engineers and a $50 million data license, the market naturally consolidated into 3-4 players. When it requires 10 engineers and a frontier model API, the market drastically fragments. Competition goes from 3 to 300.


From above, horizontal platforms are going deep into verticals for the first time. Microsoft Copilot inside Excel can now do AI-driven DCF modeling and financial statement parsing. Copilot inside Word can do contract review and case law research. Horizontal tools are becoming verticalized through AI, not engineering.

Anthropic is doing the same thing from another direction. I'm watching this closely because Fintool is an Anthropic-backed company. Claude is going all-in on verticals. The playbook is terrifyingly simple: a general agent harness (SDK), pluggable data access (MCP), and domain-specific skills (Markdown files). That's it. That's the entire tech stack you need to go from horizontal to vertical. No domain engineers. No years of development.
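The three-layer playbook can be sketched in a few dozen lines. This is a hypothetical toy model, not the actual Anthropic SDK or MCP API — every class and method name here is invented for illustration. The point it demonstrates is structural: the harness contains no domain logic; the vertical lives entirely in a pluggable data source and a Markdown skill.

```python
# Toy model of the stack: generic harness + pluggable data access + Markdown skill.
# All names are illustrative; a real harness would hand the skill text and tools
# to an LLM rather than formatting strings.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    """Stands in for an MCP-style connector: a named tool the agent can call."""
    name: str

    def query(self, request: str) -> str:
        return f"[{self.name}] results for: {request}"

@dataclass
class Agent:
    """Generic agent harness: no domain logic compiled in."""
    skills: dict = field(default_factory=dict)   # skill name -> Markdown text
    sources: dict = field(default_factory=dict)  # source name -> DataSource

    def add_skill(self, name: str, markdown: str) -> None:
        self.skills[name] = markdown             # domain expertise is just a document

    def plug_in(self, source: DataSource) -> None:
        self.sources[source.name] = source

    def run(self, skill: str, source: str, request: str) -> str:
        prompt = self.skills[skill]
        data = self.sources[source].query(request)
        return f"Applied skill '{skill}' ({len(prompt)} chars of Markdown) to {data}"

agent = Agent()
agent.plug_in(DataSource("sec_filings"))
agent.add_skill("dcf_valuation",
                "# DCF\n1. Project free cash flows...\n2. Discount at WACC...")
print(agent.run("dcf_valuation", "sec_filings", "AAPL 10-K"))
```

Going vertical here means writing one more Markdown file and plugging in one more connector — which is exactly why the article calls the replication window "weeks, not years."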

Software is becoming headless. The interface disappears. Everything flows through the agent. What matters is no longer the software. It's owning the customer relationship and use cases, which means owning the agent itself.

The technology that enables vertical depth (LLMs + Skills + MCP) is precisely what allows horizontal platforms to finally compete in areas they couldn't touch before. This is perhaps the most existential threat to vertical software: horizontal B2B players like Microsoft are no longer just dabbling in verticals; they are actively expanding into them because it's easier than ever, and because they need to own the use cases and workflows to remain relevant in an AI-first world.

05

Risk Assessment Framework

Not all vertical software faces the same risk. Here is how I think about which categories survive and which don't.

High Risk: The Search Layer

If your primary value is making data searchable and accessible through a specialized interface, and the underlying data is public or licensable, you are in serious trouble. This includes financial data terminals built on licensed exchange data, legal research platforms built on public case law, patent search tools, and any vertical where the product is essentially "we built a better search engine for your industry's data."

These companies traded at 15-20x revenue because of interface lock-in and limited competition. Both are evaporating. Look at the financial data providers who have lost 40-60% of their market cap in the last year. The market is right to reprice them.

Medium Risk: The Mixed Portfolio

Many vertical software companies have a mix of defensible and exposed business lines. A company might have a truly proprietary ratings business but also a data analytics division that mostly repackages public information. Or an index licensing business (embedded in transactions, very defensible) next to a research platform (pure search layer, very exposed).

The stock declines (20-30%) in this category reflect market uncertainty about which part of the business dominates valuation. The key question is: what percentage of revenue comes from moats LLMs can't touch?

Low Risk: Regulatory Fortresses

If your moat is regulatory certification, compliance infrastructure, and deep integration with mission-critical workflows, LLMs are largely irrelevant to your competitive position in the medium term. Medical EHR systems with HIPAA compliance and FDA validation. Life sciences platforms with regulatory lock-in. Financial compliance and reporting infrastructure.

These companies may even benefit from AI disruption elsewhere, as customers consolidate around their trusted regulated workflow vendors while rotating away from vendors they used for information retrieval.

06

The Test

For any vertical software company, ask three questions:

1. Is the data proprietary? If yes, the moat is solid. If no, the accessibility layer is crumbling.

2. Is there regulatory lock-in? If yes, LLMs won't change the switching cost equation. If no, switching costs are largely interface-driven and dissolving.

3. Is the software embedded in the transaction? If yes, LLMs sit on top of you, not replace you. If no, you are replaceable.

Zero "yeses": High risk. One: Medium risk. Two or three: You're probably fine.
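The scoring rule above is mechanical enough to write down. A minimal sketch of the three-question test as a function (the tier labels follow the article's own mapping):

```python
def moat_risk(proprietary_data: bool, regulatory_lockin: bool,
              in_transaction: bool) -> str:
    """Map the three-question test to a risk tier: count the yeses."""
    yeses = sum([proprietary_data, regulatory_lockin, in_transaction])
    if yeses == 0:
        return "high"    # pure search layer over public or licensable data
    if yeses == 1:
        return "medium"  # mixed portfolio; depends which line dominates revenue
    return "low"         # two or three moats: probably fine

# A search platform over public case law: no proprietary data,
# no regulatory lock-in, not embedded in the transaction.
print(moat_risk(False, False, False))  # high
```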

07

What I Learned Building on Both Ends

When I started building Doctrine in 2016, one of the moats was the interface. We built a beautiful search experience on top of case law and legislation.

Lawyers loved it because it was faster and more intuitive than anything else on the market. Most of the data was public, but our interface and search made it accessible.

If I were building Doctrine from scratch today, the business would face a fundamentally different competitive landscape. LLM agents can query case law as effectively as our interface did.

The liquidation of vertical SaaS isn't about the death of all vertical software. It's about the market finally starting to distinguish between companies that own truly scarce assets and those left defenseless against LLM agents.
