Menlo Ventures: 2025 State of Enterprise Generative AI Report

2025: The State of Generative AI in the Enterprise

December 9, 2025

Tim Tully, Joff Redfern, Deedy Das, Derek Xiao

AI Boom vs. Bubble

Tracking the Money: Where Are Enterprise Dollars Going?

How AI Enters the Enterprise: The Path to Production

Enterprises Are Buying More Than They Build

AI Buyers Have Higher Conversion Rates

PLG: Individual Users Are Driving AI Adoption 4x Faster Than Traditional Software

Startups vs. Incumbents: New Entrants Win in AI Applications

AI Applications: A $19 Billion Market

Departmental AI: Coding Is Generative AI's First "Killer App"

Vertical AI: Healthcare Leads Adoption

Horizontal AI: Copilot Spend Dwarfs Agent Spend

AI Infrastructure: $18 Billion for "Picks and Shovels"

LLM Market Share: Anthropic Extends Enterprise Lead

Open Source Models: Enterprise Adoption Lags Broader Ecosystem

AI Infrastructure: The Modern AI Stack, Still Under Development

What’s Next? Predictions for 2026

Final Thoughts

Data Sources and Methodology

AI Boom vs. Bubble

Despite worries about over-investment, AI is being adopted in the enterprise at a rate unprecedented in the history of modern software.

Over the past three years, confidence in AI has been high, with capital inflows repeatedly hitting record highs. This wave propelled Nvidia to the throne of the world's most valuable company by market cap.¹ Nearly $1 trillion in committed AI infrastructure investment has been announced. Venture capital has also rebounded to all-time highs, with nearly half of it concentrated in a handful of frontier AI labs.

Then, the frenzy peaked. A study from MIT² claimed that 95% of generative AI projects fail, sending a shock through the market over the summer and exposing just how quickly sentiment can shift under the weight of massive AI capital expenditures. The whispers of a bubble turned into a roar.

Given the scale of capital involved, these concerns are not unfounded. But the demand side tells a different story: our latest market data shows the technology is being widely adopted, with real revenue and productivity scaling up, indicating a market in a boom, not a bubble.

Enterprise AI is the fastest-growing software category in history.

Since 2023, the enterprise AI market size has surged from $1.7 billion to $37 billion, now accounting for 6% of the global SaaS market and growing faster than any software category in history.

In Menlo's third annual "State of Generative AI in the Enterprise" report, we surveyed approximately 500 U.S. enterprise decision-makers and combined their insights with a bottom-up generative AI market model covering model APIs, infrastructure, and applications. Since 2023, our team has tracked the evolution of AI from early experimentation to broad enterprise deployment, giving us a years-long understanding of the speed and scale of this transformation.

Tracking the Money: Where Are Enterprise Dollars Going?

Our data shows that enterprise spending on generative AI reached $37 billion in 2025, up from $11.5 billion in 2024, representing 3.2x year-over-year growth. The largest share ($19 billion) flows to the application layer: user-facing products and software built on underlying AI models. That is more than 6% of the entire software market, all within three years of ChatGPT's release.

The growth of AI extends far beyond a few chat apps, permeating every sector of the economy. We count at least 10 products with over $1 billion in Annual Recurring Revenue (ARR), and 50 products with ARR exceeding $100 million. Much of this is driven by the model APIs that power applications (e.g., Anthropic*, OpenAI, and Google), but the application layer is broadening into coding, sales, customer support, HR, and other departmental solutions, as well as verticals from healthcare and legal to the creator economy.

Generative AI spend by category 2023-2025

In 2025, more than half of enterprise AI spending went to AI applications, indicating that modern enterprises prioritize immediate productivity gains over long-term infrastructure investments.

How AI Enters the Enterprise: The Path to Production

Three years in, the path to production for enterprise AI is taking shape. Early adopters had little precedent to draw on. Today, distinct patterns have emerged that differ significantly from traditional SaaS. Enterprises show a stronger propensity to buy rather than build and are adopting AI at scale through product-led growth, something rarely seen in enterprise software.

Enterprises Are Buying More Than They Build

For some time, the prevailing wisdom was that enterprises should build most of their AI solutions themselves. Bloomberg trained BloombergGPT for finance in 2023, and Walmart developed Wallaby for retail in 2024. Teams were confident that with the right data, domain expertise, and necessary frameworks, they could handle everything in-house.

2024 data confirmed this confidence: 47% of AI solutions were built internally, and 53% were purchased.⁵ Today, 76% of AI use cases are purchased rather than built internally. Despite continued heavy investment in internal development, ready-made AI solutions can get to production faster and deliver immediate value as the enterprise tech stack continues to mature.

Build vs Buy Enterprise AI Solutions: 2024 vs 2025

Last year, enterprises were split on whether to build or buy AI solutions. Now, as the vendor ecosystem matures, more enterprises are deploying off-the-shelf AI solutions in production environments.

AI Buyers Have Higher Conversion Rates

Enterprise buyers are highly receptive to AI. We found that once an enterprise decides to explore AI solutions, the deal conversion rate is nearly double that of traditional software: 47% of AI deals reach production, compared to only 25% for traditional SaaS. Such high conversion reflects strong buyer intent and clear immediate value. Our survey data shows that most enterprises maintain a long list of potential AI use cases—often more than 10—but focus primarily on near-term productivity gains or cost savings. While enterprises propose slightly more internal use cases (59%) than customer-facing ones (41%), the conversion rates for both are nearly identical, showing that operational AI investments deliver value as reliably as customer-facing innovations.

Conversion Rates AI Buyers vs Traditional Software Buyers

The conversion rate for AI buyers is 47%, compared to 25% for SaaS, indicating that AI provides enough immediate value to streamline standard procurement processes.

PLG: Individual Users Are Driving AI Adoption 4x Faster Than Traditional Software

Beyond centralized procurement channels, AI solutions are increasingly entering the enterprise through individual users rather than enterprise executives. We found that 27% of all AI application spending comes from Product-Led Growth (PLG) models, almost four times that of traditional software (7%).

And that number is conservative. If we consider "shadow AI usage"—employees using personal credit cards to buy tools like ChatGPT Plus (about 27% of which is work-related)⁶—then PLG-driven tools could account for nearly 40% of application AI spending.
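The arithmetic behind a figure like "nearly 40%" can be reproduced with assumed inputs. In the sketch below, only the 27% PLG share, the ~27% work-related fraction, and the $19 billion application total come from the report; the size of the personal-subscription pool is a made-up placeholder, so treat this as an illustration of the logic rather than the report's actual model:

```python
# Illustrative arithmetic: direct PLG spend plus the work-related
# slice of personal ("shadow") AI purchases. Dollar figures marked
# "assumed" are placeholders, not survey data.

app_spend = 19.0                       # $B, enterprise AI application spend (from report)
direct_plg = 0.27 * app_spend          # 27% of app spend arrives via PLG motions
shadow_consumer_spend = 15.0           # $B, ASSUMED personal AI subscription pool
work_related = 0.27 * shadow_consumer_spend  # ~27% of personal usage is work-related

# Fold shadow spend into both numerator and the total addressable pool.
plg_share = (direct_plg + work_related) / (app_spend + work_related)
print(f"{plg_share:.0%}")  # prints "40%" under these assumptions
```

Varying the assumed consumer pool moves the result, which is why the report hedges the figure as "nearly 40%."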

Product-Led Growth (PLG) Share of AI Spend

In AI, PLG dynamics are reaching enterprise scale faster and further than traditional SaaS. Cursor achieved $200 million in revenue before hiring its first enterprise sales rep. n8n built its business on broad adoption in the open-source community, only formally signing contracts after hundreds of employees became active users. ElevenLabs, Gamma, and Wispr Flow* have scaled similarly.

Developers and technical teams are particularly receptive to this trend. Many first discover tools for personal use, then prove their value in daily work, creating bottom-up demand that eventually converts into enterprise contracts. Lovable, OpenRouter*, and fal all follow this pattern, where informal use by product managers and engineers converts into enterprise agreements once the tools are embedded in development workflows.

Startups vs. Incumbents: New Entrants Win in AI Applications

In the AI application layer, startups are far ahead. According to our data, startups now earn nearly $2 of revenue for every $1 earned by incumbents: they hold 63% of the market share, up from 36% last year, when incumbents still dominated.

Theoretically, this shouldn't happen. Incumbents have established distribution channels, data moats, deep enterprise relationships, large sales teams, and strong balance sheets. In practice, however, AI-native startups are outpacing larger competitors in some of the fastest-growing application categories.

Product and Engineering (71% startup share): Code generation is the prime example of startups winning. GitHub Copilot was the pioneer with every structural advantage, but Cursor has captured significant market share by shipping better features faster—leading in repo-level context, multi-file editing, diff approval, and natural-language commands. Cursor's model-agnostic approach let developers adopt frontier models like Claude 3.5 Sonnet from day one, unconstrained by Microsoft's partnership choices. This product velocity created a PLG flywheel: Cursor won over individual developers, who then championed it for enterprise use.

Sales (78% startup share): AI-native startups like Clay and Actively win by attacking workflows that Salesforce doesn't control: research, personalization, and enrichment, which rely heavily on unstructured signals outside the CRM (the web, social media, email). By owning these channels and expanding downstream, they become the AI layer where sales reps actually work, sitting alongside the traditional system of record in the short term and potentially becoming the system of record itself over the long run.

Finance and Operations (91% startup share): In heavily regulated domains like finance, incumbents such as Intuit QuickBooks face accuracy requirements so strict that they limit the ability to ship native AI workflows. While the category remains small in absolute terms, this stagnation created an opening for startups like Rillet, Campfire, and Numeric to build AI-first ERPs with real-time automation and intelligent workflows, winning because incumbents couldn't ship reliable next-generation products fast enough.

The following chart shows how this dynamic varies across enterprise departments, each with its specific functional tools. Teams facing fragmented, data-intensive workflows suitable for automation are often the first to adopt AI. In areas where reliability, integration depth, and dependence on existing systems outweigh the benefits of rapid iteration, incumbents still hold the advantage.

Departmental AI: Startup vs Incumbent Market Share

AI startups thrive in agile departments like market research, sales, marketing, and product. Incumbents maintain advantages in IT and data science, where reliability and deep integration are more important than speed.

As we go deeper into the tech stack, the picture changes. At the infrastructure layer, incumbents hold 56% of the market share because many AI application developers still choose the data platforms they have trusted for years. While emerging AI-native infrastructure companies like Temporal, Supabase, Neon*, and Pinecone* have achieved impressive growth, incumbents like Databricks, Snowflake, MongoDB, and Datadog have grown just as significantly, because even AI-native application developers still primarily rely on incumbent platforms to manage data, orchestrate workflows, and monitor operations.

Startups excel in applying AI, but trail incumbents in AI infrastructure.

Startups dominate in AI applications, earning nearly $2 for every $1 incumbents earn, while enterprise infrastructure spending still favors incumbents.

AI Applications: A $19 Billion Market

In 2025, the application layer captured $19 billion, accounting for more than half of total generative AI spending. This spending is divided into three categories:

Departmental AI ($7.3 billion), built for specific job roles like software development or sales;

Vertical AI ($3.5 billion), targeting industries like healthcare or finance;

Horizontal AI ($8.4 billion), improving productivity across all functional departments.

Departmental AI: Coding Is Generative AI's First "Killer App"

In 2025, departmental AI spending reached $7.3 billion, a 4.1-fold increase year over year. Coding is the standout, reaching $4 billion (55% of departmental AI spending) and making it the largest category in the entire application layer; the rest covers IT (10%), Marketing (9%), Customer Success (9%), Design (7%), and HR (5%).

Departmental AI Spend by Category

Coding has become the hottest application scenario in departmental AI. Investment is concentrated in areas with the most direct impact: Product and Engineering teams currently account for the vast majority of spending.

As models crossed economically meaningful performance thresholds, code became AI's first true "killer app"—Anthropic's Claude 3.5 Sonnet triggered the first breakthrough in mid-2024. Adoption then spread rapidly; today, 50% of developers use AI coding tools every day (rising to 65% in top-quartile organizations). The code completion market has grown to $2.3 billion, while coding agents and AI app builders have exploded from near zero. Teams report development speed gains of more than 15% as they adopt AI tools across every stage of the software development lifecycle: from prototyping (Lovable) to code refactoring (OpenHands*), design-to-code (Weaver), quality assurance (Meticulous*), code review (Graphite*), site reliability engineering (Resolve), and deployment (Harness*).

AI Coding Spend: 2024 vs 2025

The sharp growth from $550 million in 2024 to $4 billion in 2025 reflects a shift in capabilities: models can now reason over entire codebases and execute multi-step tasks. Coding has moved from point solutions to end-to-end automation.

While coding accounts for more than half of departmental AI spending ($4 billion), the technology is rapidly spreading across numerous enterprise departments. IT operations tools spending reaches $700 million, with teams using AI to automate incident response and infrastructure management. Marketing platform spending reaches $660 million, driven mainly by content generation and campaign optimization. Customer success tool spending reaches $630 million, with AI handling ticket routing, sentiment analysis, and proactive customer outreach. These categories all target repetitive workflows, aiming for immediate and measurable productivity gains. The following chart shows the players emerging in various functional areas, competing for the enterprise's $7.3 billion investment in departmental AI.

Menlo Ventures Departmental AI Market Map

AI-native startups are rapidly emerging across various job functions and are expected to capture a significant share of the $7.3 billion departmental AI spend in 2025.

Vertical AI: Healthcare Leads Adoption

In 2025, spending on vertical AI solutions reached $3.5 billion, nearly triple the $1.2 billion of 2024. By industry, healthcare alone accounts for nearly half of all vertical AI spend—about $1.5 billion, more than triple the $450 million of the previous year and more than the next four verticals combined.

Vertical AI Spend Category

Total investment across verticals reaches $3.5 billion this year, nearly triple last year. Healthcare investment reaches $1.5 billion, accounting for 43% of market share, exceeding the combined investment of the other four vertical sectors.

The healthcare industry has moved slowly, long constrained by lengthy procurement cycles and regulatory resistance. But over the years, as administrative burdens have increased, profit margins have shrunk, and personnel shortages have persisted, healthcare systems have become one of the strongest demanders of AI automation in the entire economy.

Most spending is concentrated in administrative and clinical workflows, most notably ambient medical scribes. In 2025, the medical scribe market reached $600 million (a 2.4-fold year-over-year increase), minting two new unicorns (Abridge and Ambience) alongside the established leader, Nuance's DAX Copilot. Since clinicians spend an average of 1 hour on documentation for every 5 hours of patient care, scribes that cut documentation time by more than 50% meaningfully relieve doctors' administrative burden and let them focus on the most important work of their practice.

For a deeper analysis of how AI is changing healthcare, see our "2025 State of AI in Healthcare Report" to understand adoption trends, budget changes, and areas where health systems have achieved significant ROI.

Beyond healthcare, AI is beginning to permeate almost every sector of the economy. Led by companies like Eve*, the legal industry market size has reached $650 million; creator tools market size reaches $360 million; government market size reaches $350 million. AI adoption is strongest in industries previously underserved by software: those that relied on manual labor, lacked structured workflows, and depended on human services, which can now be automated through generative AI. The following chart shows companies building businesses in these areas and vying for market share that has flowed into the $3.5 billion vertical AI space this year.

Menlo Ventures Vertical AI Market Map

The vertical AI sector is expected to reach $3.5 billion by 2025, three times last year's investment. These companies indicate that native AI software is rising to serve all sectors of the economy.

Horizontal AI: Copilot Spend Dwarfs Agent Spend

The horizontal AI market size reaches $8.4 billion, remaining the largest and fastest-growing category in the application layer, growing 5.3-fold year-over-year. Within this, Copilots dominate with an 86% market share ($7.2 billion), led primarily by ChatGPT Enterprise, Claude for Work, and Microsoft Copilot. Agent platforms like Salesforce Agentforce, Writer, and Glean account for another 10% market share ($750 million), while personal productivity tools like Granola and Fyxer account for the remaining 5% market share ($450 million).

Horizontal AI Spend by Category

General-purpose copilots dominate today, but as agent capabilities strengthen, we can expect a shift from copilot to autopilot.

AI Infrastructure: $18 Billion for "Picks and Shovels"

Our data shows that infrastructure layer spending reached $18 billion in 2025, roughly half of total generative AI spending and double the $9.2 billion of 2024. This spending is divided into three categories:

Foundation Model APIs ($12.5 billion) power the intelligence behind all AI applications.

Model Training Infrastructure ($4 billion) enables frontier labs and enterprises to train and fine-tune models.

AI Infrastructure ($1.5 billion) manages the storage, retrieval, and orchestration of data that connects LLMs with enterprise systems.

LLM Market Share: Anthropic Extends Enterprise Lead

This year, the foundation model landscape shifted decisively, with Anthropic pulling further ahead of OpenAI as the enterprise leader. We estimate that Anthropic now holds 40% of enterprise LLM spending, up from 24% last year and 12% in 2023. Over the same period, OpenAI's enterprise share nearly halved, falling from 50% in 2023 to 27% in 2025. Google also grew significantly, rising from 7% in 2023 to 21% in 2025. Together, these three companies account for 88% of enterprise LLM API usage, with the remaining 12% split among Meta's Llama, Cohere, Mistral, and numerous smaller vendors.

Enterprise LLM API Usage Market Share

These views collectively illustrate the shift in the LLM ecosystem. The stacked bar chart shows the magnitude of market share changes, trend lines highlight the momentum of leading vendors, and coding share emphasizes the path to competitive advantage.

Anthropic's rise is driven by its enduring dominance in coding, where it now holds an estimated 54% market share (up from 42% six months ago), compared to OpenAI's 21%. The gains are largely thanks to the popularity of Claude Code.

In fact, Anthropic has maintained a nearly unrivaled 18-month lead on the LLM code generation leaderboard, starting with the release of Claude Sonnet 3.5 in June 2024. In mid-November 2025, Google released Gemini 3 Pro, whose official model card shows that Gemini 3 Pro ranks at the top of most major benchmarks—except for SWE-bench Verified, where Gemini 3 Pro still trails behind Claude Sonnet 4.5. Just a week later, Anthropic further extended its lead with Claude Opus 4.5, which set new records for code generation performance and once again cemented Anthropic's leadership in this category.

Open Source Models: Enterprise Adoption Lags Broader Ecosystem

Despite slowing momentum this year, Llama remains the most widely used open-source model in the enterprise. However, the model's stagnation—no major new release since Llama 4 in April—has helped drive open-source models' overall share of enterprise LLM usage down from 19% last year to 11% today.

Change in Open Source LLM Enterprise Market Share

Enterprises remain cautious, preferring closed-source models: open-source LLMs account for only 11% of the market. But developers keep pushing boundaries, running Chinese open models in production and testing new architectures at scale.

Although Chinese open-source models have made impressive progress this year and are increasingly popular among startups, enterprises remain cautious. Overall, they account for only 1% of total LLM API usage (accounting for about 10% of enterprise open-source).

Outside the enterprise, adoption looks very different. vLLM and OpenRouter*, two common barometers of startup and independent-developer usage, show rapidly rising use of Qwen, DeepSeek (V3, R1), Moonshot/Kimi, MiniMax, and Z AI's GLM, although DeepSeek's usage has cooled after the initial surge that followed the R1 release.

Smaller models like Qwen3 and GLM are favored because their performance rivals that of larger peers. Airbnb, for example, uses Qwen models extensively in its user-facing AI features, and Cursor reportedly built its in-house model on an open-source base.

Monthly Open Source LLM Token Share

The distribution of open source LLM tokens continues to evolve, with Chinese models gaining significant traction among developers over the past year.

AI Infrastructure: The Modern AI Stack, Still Under Development

Despite the widespread discussion of "agents," actual production architectures remain surprisingly simple: only 16% of enterprise deployments and 27% of startup deployments qualify as true agents—systems in which LLMs can plan and execute actions, observe feedback, and adjust their behavior—while most systems are still built around fixed sequences or routing-based workflows centered on single model calls. Customization patterns reinforce this immaturity. Prompt design remains the mainstream technique, followed by Retrieval-Augmented Generation (RAG). More advanced methods—fine-tuning, tool calling, context engineering, and reinforcement learning (RL)—remain niche, used primarily by frontier teams.
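The distinction the survey draws between fixed workflows and true agents can be made concrete. Below is a minimal Python sketch: `call_llm`, `check_inventory`, and the `ACT:`/`FINISH:` protocol are all illustrative stand-ins (a real system would call a model API and real tools), but the structural difference—a predetermined sequence versus a plan-act-observe loop—is exactly the one described above:

```python
# Stub "model": in production this would call a real LLM API.
def call_llm(prompt: str) -> str:
    if "TOOL RESULT" in prompt:          # it has observed a tool result
        return "FINISH: 42 units in stock"
    return "ACT: check_inventory(sku-123)"  # otherwise, request an action

# Stub tool: stand-in for a real enterprise system call.
def check_inventory(sku: str) -> str:
    return "42 units"

# 1) Fixed-sequence workflow: one model call per predetermined step,
#    no feedback loop — the sequence never changes.
def workflow(question: str) -> str:
    return call_llm(f"Summarize: {question}")

# 2) Agent: the model chooses actions, observes results, and iterates
#    until it decides it is done (or a step budget runs out).
def agent(question: str, max_steps: int = 5) -> str:
    context = question
    for _ in range(max_steps):
        decision = call_llm(context)
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        if decision.startswith("ACT:"):
            observation = check_inventory("sku-123")
            context += f"\nTOOL RESULT: {observation}"
    return "gave up"

print(agent("How many units of sku-123 are in stock?"))
# prints "42 units in stock"
```

The survey's finding is that most production systems today look like `workflow`, not `agent`.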

AI Architecture in Production

Putting the hype aside, most "AI agents" are actually just basic if-then logic around model calls. This simple architecture meets current application scenarios but also reveals that we are still in the early stages.

Because LLM-based application architecture is still evolving, the modern AI tech stack is overall not too different from last year. To date, the biggest beneficiaries have been incumbents that have expanded trusted data and infrastructure platforms, such as Databricks, Snowflake, MongoDB, and Datadog.

On the other hand, startup activity is concentrated in inference and compute, where AI-native vendors compete directly with hyperscaler developer platforms. Inference platforms like Fireworks, Baseten, Modal, and Together stand out for performance and developer experience—they offer serverless, high-throughput, open-weight endpoints and achieve more than 2x acceleration through hand-written or fused kernels, optimized serving stacks, and strictly managed GPU clusters.

Higher up the stack, a new generation of observability and tooling vendors, including LangChain, Braintrust, and Judgment Labs, is building the runtime observability layer for AI, entering through development workflows like evaluation, tracing, and continuous learning. The following chart shows the key players building the foundational layers that power generative AI applications.

Modern AI Stack: Building Blocks of Generative AI

Enterprises have invested $18 billion in AI infrastructure, covering foundation models, training systems, and the data and orchestration layers. This market map shows the companies serving each part of this tech stack.

What’s Next? Predictions for 2026

In 2025, AI became the fastest-growing software category in history. Based on our observations across the ecosystem, we offer four predictions for the coming year.

1. AI will surpass humans in day-to-day real-world programming tasks.

Progress shows no sign of a bottleneck, especially in verifiable domains like mathematics and programming, where the best models keep improving.

2. Jevons Paradox still holds.

Even as per-unit inference costs fall, inference volume is growing by orders of magnitude, so net spending on generative AI continues to rise.

Benchmarks are saturating and no longer fully reflect how effective models are in real-world applications. Models that simply chase the highest benchmark scores will not retain users over the long run.

For frontier use cases like coding, users are relatively price-insensitive and willing to pay a premium for performance.

Models will find widespread adoption in at least one major application beyond programming.

3. Explainability and governance become mainstream.

As agents gain autonomy and decision-making power, the ability to explain and govern their decisions will become increasingly important, driven primarily by demand from enterprise AI buyers. We expect governments to require explainable decision processes and audit logs of agent actions. Companies like Goodfire*, which work to make neural networks interpretable and controllable, will matter more and more to enterprises.

4. Models eventually migrate to the edge.

Driven by latency, privacy, and security requirements, computing will keep moving to the device, and the cost of a growing set of non-frontier models will approach zero. Mobile device makers like Google, Apple, and Samsung will ship dedicated low-power GPU compute modules, letting users run fast inference on their phones without a network connection and at near-zero marginal cost.

Final Thoughts

Two years ago, when generative AI was largely confined to pilots and proofs-of-concept, we released our first "State of Generative AI in the Enterprise" report. We set out to ground the conversation in real data from actual enterprise buyers rather than analyst forecasts and vendor expectations: to find the signal amid the noise.

The results of this year's study make it clear that this transformation is no longer speculative. The enterprise AI market size has now reached $37 billion, making it the fastest-growing area in software history. Across every industry, AI has become a core component of work. Seeing tangible returns, enterprises are increasing their investment.

We are honored to work with the many companies driving this change: frontier model providers leading the coding transformation, security platforms securing enterprise AI at scale, and teams redefining applications in vertical industries like healthcare, legal, finance, and education. These teams are setting new standards and laying the foundation for the next wave of innovation.

This comprehensive transformation has been underway for three years. While still early, the first leaders are emerging, and the value is evident. If you are a founder on the frontier of the industry, we look forward to connecting.

Menlo Ventures is all in on AI. Let's build the future together.

Data Sources and Methodology

Survey Methodology

This report combines the results of a survey conducted with an independent research firm from November 7 to 25, 2025, of 495 U.S. enterprise AI decision-makers. Respondents include C-level executives, VPs of Engineering and Product, and technical leads responsible for AI procurement and development decisions at companies actively using AI tools.

Market Size Model:

Our market size estimates combine survey data from enterprise AI decision-makers with an analysis of the generative AI ecosystem. We categorize companies by industry, type (startup vs. incumbent), and go-to-market strategy (selling to individuals vs. enterprises), and estimate the revenue distribution in the AI space by referencing public information, industry reports, and market analysis.

Scope:

Generative AI spending includes foundation models, model training infrastructure, AI infrastructure, and AI applications from startups and incumbents. It does not include chips (e.g., Nvidia), inference and model serving (e.g., AWS, GCP, Azure, Fireworks), and AI features built into existing software solutions (e.g., Intuit Assist).

LLM Market Share:

LLM market share represents spending estimates based on the proportion of production API usage. Survey respondents reported the percentage of each model used in their AI workloads. All responses are weighted according to the size of each enterprise and startup application and verified where public financial data is available.
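The weighting described above can be sketched in a few lines. In this reconstruction, each survey response carries a per-model usage mix and is weighted by that respondent's AI spend; all figures below are invented for illustration and are not the survey's data, and the exact weighting scheme Menlo used may differ:

```python
# Illustrative spend-weighted market share, per the methodology above.
# Each tuple: (annual AI spend in $M, {model: share of that workload}).
# All numbers are made up for illustration.

responses = [
    (10.0, {"anthropic": 0.6, "openai": 0.3, "google": 0.1}),
    (2.0,  {"openai": 0.8, "google": 0.2}),
    (25.0, {"anthropic": 0.4, "google": 0.4, "openai": 0.2}),
]

def weighted_share(responses):
    totals, grand = {}, 0.0
    for spend, mix in responses:
        for model, share in mix.items():
            # Attribute spend to each model proportionally to usage.
            totals[model] = totals.get(model, 0.0) + spend * share
        grand += spend
    return {m: round(v / grand, 3) for m, v in totals.items()}

print(weighted_share(responses))
# → {'anthropic': 0.432, 'openai': 0.259, 'google': 0.308}
```

Note how the large-spend respondent dominates the result; this is why weighting by deployment size gives a very different picture than a simple average of responses.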

Limitations:

Market forecasts represent our best assessment as of December 2025. The survey sample is limited to U.S. enterprises. Revenue forecasts for private companies are based on public data and industry analysis.

1. Reuters, "Nvidia Breaches $5 Trillion Market Cap", October 29, 2025, https://www.reuters.com/business/view-nvidia-breaches-5-trillion-market-cap-2025-10-29/; Companies Market Cap, "Nvidia Market Cap", accessed December 2025, https://companiesmarketcap.com/nvidia/marketcap

2. MIT MLQ AI, "State of AI in Business 2025 Report", 2025, https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

3. Generative AI spending includes funds for foundation models, model training, AI infrastructure, and AI applications from startups and incumbents. Note that this market size does not include revenue from chips (e.g., Nvidia), inference and model serving (e.g., AWS, GCP, Azure, Fireworks), or AI features built into existing software solutions (e.g., Intuit Assist). For detailed methodology, please see the "Methodology" section.

4. Forecast generative AI spending of $1.7 billion in 2023 and $11.5 billion in 2024, but this forecast excludes the inference portion, which was previously included in Menlo Ventures' "State of Generative AI in the Enterprise" reports for 2023 and 2024.

5. Menlo Ventures, "2024: The State of Generative AI in the Enterprise", November 20, 2024, https://menlovc.com/2024-the-state-of-generative-ai-in-the-enterprise/

6. Aaron Chatterji, Tom Cunningham, David J. Deming, Zoe Hitzig, Christopher Ong, Carl Shan, and Kevin Wadman, "How People Use ChatGPT", NBER Working Paper No. 34255, September 2025, https://doi.org/10.3386/w34255

7. Menlo Ventures, "2024: The State of Generative AI in the Enterprise", November 20, 2024, https://menlovc.com/2024-the-state-of-generative-ai-in-the-enterprise/

8. Menlo Ventures, "2025: The State of AI in Healthcare", October 21, 2025, https://menlovc.com/perspective/2025-the-state-of-ai-in-healthcare/

9. LLM market share approximates spending amounts based on the proportion of production API usage. Survey respondents reported the percentage of each model used in their AI workloads. Responses were then weighted by the size of each enterprise and startup application, and the results were triangulated with publicly disclosed financial data.

10. Google DeepMind, "Gemini 3 Technical Report", November 2025, https://storage.googleapis.com/deepmind-media/Model-Cards/Gemini-3-Pro-Model-Card.pdf

11. SWE-bench, "SWE-bench Verified Leaderboard", accessed December 2025, at: https://www.swebench.com/

12. Anthropic, "Claude 4.5", December 2025, https://www.anthropic.com/news/claude-4-5

13. Based on public adoption metrics from vLLM (https://github.com/vllm-project/vllm) and model usage share from OpenRouter (https://openrouter.ai), data collected November to December 2025.

14. Bloomberg, "Chesky Says OpenAI Tools Not Ready for ChatGPT Integration in Airbnb App", October 21, 2025, https://www.bloomberg.com/news/articles/2025-10-21/airbnb-ceo-brian-chesky-says-chatgpt-integration-not-ready-for-airbnb-app

15. KrASIA, "Coding tools Cursor and Windsurf found using Chinese AI in latest releases", November 6, 2025, https://kr-asia.com/coding-tools-cursor-and-windsurf-found-using-chinese-ai-in-latest-releases; Al Jazeera, "China's AI is quietly making big inroads in Silicon Valley", November 13, 2025, https://www.aljazeera.com/economy/2025/11/13/chinas-ai-is-quietly-making-big-inroads-in-silicon-valley

Tim Tully

Tim is a Partner at Menlo Ventures, focused on early-stage investments that reflect his passion for AI/machine learning, the new data stack, and next-generation cloud computing. His experience as a technology builder, buyer, and seller also influences his investments in companies like Pinecone, Neon (acquired by Databricks), Edge Delta, JuliaHub, and TruEra.

Joff Redfern

Joff, a self-described "tall, slightly nerdy product guy," formerly served as Chief Product Officer at Atlassian, where he led its acclaimed product portfolio, including Jira, Confluence, and Trello. During his tenure, Joff was named to Products That Count's list of "Global 20 CPOs" and recognized as one of the "20 Top Product Managers."

Deedy Das

Deedy is a Partner at Menlo Ventures, focusing on early-stage investments in AI/machine learning, next-generation infrastructure, and enterprise software. Having worked as an engineer and product lead at successful startups and large public companies, he helps technical founders understand how to build and scale sustainable tech companies.

Derek Xiao

As a Partner at Menlo Ventures, Derek focuses on early-stage investments in AI, cloud infrastructure, and digital health. He works with companies from seed through growth, including Anthropic, Eve, Neon, and Unstructured. Before joining Menlo, Derek worked at Bain & Company, advising technology investors on opportunities in areas like machine learning.

