AI's Next Investment Cycle: The Focus on Real Applications

Exploring how AI applications drive profitability amid high infrastructure costs and competition in the AI industry.
Surya

From GPUs to use cases: Why AI’s real money is in applications

1. Context: AI Industry at a Profitability Crossroads

The artificial intelligence (AI) industry has entered a decisive phase where the central question is no longer technical feasibility but economic sustainability. After years of heavy investment in data centres, chips, and foundation models, the industry is reassessing whether these investments can generate durable profits.

In 2025, global spending on AI infrastructure reached $320 billion, reflecting confidence in long-term potential but also creating high fixed costs. Foundation model providers face thin margins due to high inference costs and intense competition that suppresses pricing power.

This transition matters for governance and economic policy because AI is increasingly seen as a general-purpose technology with economy-wide spillovers. If profitability challenges persist, innovation could become overly dependent on venture capital rather than market demand.

The economic logic is that technological breakthroughs must eventually align with viable business models. Ignoring this transition risks misallocating capital and slowing productive innovation.


2. Infrastructure-Centric Growth Model: Limits and Risks

Foundation model businesses have scaled rapidly but struggle to convert scale into profits. Despite reaching $13 billion in annualised revenue by August 2025, OpenAI reportedly incurred losses of $5 billion in 2024, highlighting the cost pressures of compute-intensive models.

High infrastructure and inference costs eat into revenues, while competition among model providers keeps prices low. Much of the current expansion is sustained by venture capital and strategic corporate funding rather than operating profits.

This model is inherently fragile. If capital inflows slow, firms relying solely on infrastructure-heavy approaches may face consolidation or exit, with implications for competition and innovation diversity.

From a development perspective, capital-intensive growth without clear profitability creates systemic risk. Ignoring cost–revenue mismatches can lead to abrupt market corrections.


3. Shift Towards AI Applications: Evidence of Market Demand

In contrast to infrastructure, AI applications demonstrate clearer demand-side validation. In 2025, businesses spent $19 billion on AI applications, accounting for over 50% of generative AI spending and more than 6% of the total software market — a scale reached within three years of ChatGPT’s launch.

Market traction is evident:

  • At least 10 AI products generate over $1 billion in annual recurring revenue.
  • Around 50 products earn over $100 million annually.
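The thresholds above are annual recurring revenue (ARR) figures, which in industry reporting are usually annualised run rates: the latest month’s recurring revenue multiplied by twelve. A minimal sketch of that arithmetic, using a hypothetical product (the monthly figure is illustrative, not from the source):

```python
def annualised_run_rate(monthly_recurring_revenue: float) -> float:
    """ARR as commonly reported: the latest month's recurring
    (subscription) revenue, annualised by multiplying by 12."""
    return monthly_recurring_revenue * 12

# A hypothetical product billing $85M/month in subscriptions
# would clear the $1 billion ARR threshold cited above:
arr = annualised_run_rate(85_000_000)
print(arr)  # 1020000000, i.e. ~$1.02 billion ARR
```

Because a run rate extrapolates a single month, it flatters fast-growing products and overstates revenue if churn rises later in the year.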

These figures indicate that firms are no longer merely experimenting with AI but embedding it into routine operations. Applications translate abstract AI capabilities into productivity gains and revenue streams.

The governance insight is that adoption follows utility. Without application-layer value, infrastructure investments fail to translate into broad-based economic gains.


4. Investment Patterns: From Technology to Customers

Investor behaviour reflects this shift. By Q3 2025, there were 265 private equity deals involving AI applications, a 65% year-on-year increase, with 78% being add-on acquisitions for existing portfolios. Strategic M&A deal values in AI rose by 242% compared to the previous year.
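The growth figures above can be read back into absolute terms. One point worth making explicit: a +242% rise means deal values reached 3.42 times the prior year, not 2.42 times. A quick sketch of the arithmetic (the prior-year deal count is implied by the cited growth rate, not stated in the source):

```python
def yoy_growth(current: float, previous: float) -> float:
    """Year-on-year growth as a fraction: 0.65 means +65%."""
    return current / previous - 1

# 265 deals at +65% YoY implies roughly 161 deals the prior year:
prior_deals = 265 / (1 + 0.65)
print(round(prior_deals))  # 161

# A +242% rise in deal values is a 3.42x multiple of the prior year:
multiple = 1 + 2.42
print(round(multiple, 2))  # 3.42
```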

A notable example is Meta’s $2 billion acquisition of Manus in December 2025. Manus reached $125 million in annual revenue within nine months by offering a task-oriented AI agent, demonstrating both technical capability and commercial viability.

These trends show that investors now prioritise real customers and cash flows over purely technical milestones.

The investment logic is that sustainable innovation attracts capital when it solves concrete problems. Ignoring customer-centric metrics risks speculative bubbles.


5. Where Value Is Concentrating: Departmental and Vertical AI

Real value creation is concentrated in departmental AI, particularly coding tools. In 2025, coding applications accounted for $4 billion of the $7.3 billion departmental AI market, making them the largest segment.

Adoption is deepening:

  • 50% of developers use AI coding tools daily.
  • Usage rises to 65% in top-performing firms.

Foundation model competition also reflects application dominance. Anthropic captured 40% of enterprise LLM spending in 2025, up from 12% in 2023, largely by leading in coding applications (54% market share), while OpenAI’s enterprise share declined.

The development lesson is that applications pull infrastructure adoption. Ignoring sector-specific use cases limits diffusion and productivity gains.


6. Profitability Outlook: Applications Versus Compute

According to Morgan Stanley, generative AI achieved a 34% contribution margin in 2025, its first profitable year, with projections of 67% by 2028 as infrastructure costs decline. However, these gains accrue mainly to firms offering end-to-end solutions, not raw compute.
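Contribution margin here means revenue minus variable costs (chiefly inference compute) as a share of revenue, before fixed costs such as model training. A minimal sketch with hypothetical dollar figures chosen to match the cited margins:

```python
def contribution_margin(revenue: float, variable_costs: float) -> float:
    """Share of revenue left after variable costs (e.g. inference
    compute), before fixed costs such as model training runs."""
    return (revenue - variable_costs) / revenue

# Hypothetical: $100 of generative-AI revenue carrying $66 of
# variable compute cost yields the ~34% margin cited for 2025.
print(contribution_margin(100, 66))   # 0.34

# If falling infrastructure prices cut that cost to $33, the margin
# roughly doubles to the ~67% projected for 2028.
print(contribution_margin(100, 33))   # 0.67
```

This is why declining compute prices matter so much to the projection: variable cost is the only lever in the formula that infrastructure efficiency can move.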

Circular financing obscures true demand. For example, a significant share of reported cloud AI revenue stems from discounted internal spending among strategic partners, covering costs rather than generating surplus.

Applications break this loop by generating external revenue, validating demand beyond ecosystem recycling.

The fiscal logic is that genuine profitability requires independent demand. Ignoring circular financing risks overestimating sectoral health.


7. Policy and Regulatory Implications

As foundation model providers move into applications, competition concerns intensify. Dominant firms combining infrastructure and applications may crowd out independent developers. Mergers, especially acqui-hires, risk reducing innovation and labour mobility.

Other governance challenges include:

  • Copyright disputes over training data.
  • Privacy risks from AI agents accessing sensitive information.

Policymakers are cautioned against premature overregulation. The application layer needs experimentation space, while competition oversight—especially merger review—remains essential.

The regulatory logic is balance: under-regulation risks monopolisation, while over-regulation stifles innovation. Ignoring either distorts market evolution.


Conclusion

The AI sector’s trajectory mirrors earlier technological revolutions: infrastructure enables potential, but applications deliver value. As profitability shifts toward solution-driven models, policy, investment, and innovation must realign around real-world use cases. Long-term economic gains will depend on fostering competitive, application-led ecosystems that translate AI capability into broad-based productivity and growth.

Quick Q&A


What does the shift from AI infrastructure to AI applications signify?

The shift from AI infrastructure to AI applications signifies a maturation of the artificial intelligence ecosystem. In the early phase, enormous investments flowed into foundational layers—data centres, GPUs, cloud infrastructure and large language models—under the assumption that scale itself would generate value. While this phase proved that AI works technically, it also exposed a key weakness: infrastructure-heavy AI businesses struggle to achieve profitability due to high inference costs and intense competition that suppress pricing power.

AI applications, by contrast, sit closer to end-users and business workflows. They translate abstract AI capabilities into tangible productivity gains—such as faster coding, automated customer support or clinical decision assistance. The fact that businesses spent $19 billion on AI applications in 2025, accounting for more than half of generative AI spending, demonstrates that demand has moved beyond experimentation into routine deployment. This mirrors earlier technological revolutions, where value ultimately accrued not to infrastructure providers but to application builders.

Conceptually, this shift highlights an important economic principle relevant for policy and governance: technology adoption becomes sustainable only when it solves real problems. Just as the internet was monetised through applications like e-commerce and search rather than bandwidth alone, AI’s long-term viability depends on use cases that deliver measurable outcomes, not merely computational scale.

Why do foundation model providers face a structural profitability challenge?

Foundation model providers face a structural profitability challenge because their cost base scales almost linearly with usage. Training and running large models requires continuous spending on GPUs, energy and cloud infrastructure. Even when revenues grow rapidly—as seen with OpenAI’s $13 billion annualised revenue—the high cost of inference and ongoing model upgrades erodes margins, leading to sustained losses.

Competition further aggravates this problem. As more firms release comparable models, prices for API access are driven down, limiting pricing power. In addition, much of the reported revenue in the AI ecosystem comes from circular financing, where companies buy compute from cloud providers who are also their investors or partners. This obscures true market demand and delays financial discipline.

From a broader economic perspective, this illustrates why infrastructure-led growth alone is unsustainable. Without downstream applications generating independent demand, foundation models risk becoming commoditised utilities. This dynamic reinforces the argument that profitability and long-term innovation are more likely to emerge from application layers rather than from foundational technology alone.

What does Meta’s acquisition of Manus illustrate?

Meta’s $2 billion acquisition of Manus illustrates how value in the AI sector is increasingly being captured by companies that deliver focused, outcome-oriented applications. Manus did not build a new foundational model; instead, it developed an AI agent that performed concrete tasks efficiently. Within nine months, it achieved $125 million in annual revenue, proving both product-market fit and scalability.

This acquisition signals a shift in strategic priorities among large technology firms. Rather than investing only in infrastructure or models, they are seeking to acquire applications that already have paying customers and embedded workflows. Similar logic underpinned acquisitions like ServiceNow–Moveworks and Nvidia’s purchases of AI startups, all of which emphasised customer value rather than raw computing power.

For policymakers and aspirants, this example underscores a critical insight: innovation ecosystems thrive when startups solve narrow but impactful problems. The Manus case also offers lessons for countries like India, where building applied AI solutions tailored to local sectors—healthcare, agriculture or governance—may yield greater economic returns than attempting to compete at the foundational model level.

What is departmental AI, and why is its rise significant?

Departmental AI refers to AI tools designed for specific functions such as coding, customer service, legal research or human resources. Its rapid growth—coding tools alone accounted for $4 billion of a $7.3 billion market in 2025—reflects strong demand for targeted solutions that integrate seamlessly into daily workflows. By automating routine tasks, departmental AI significantly boosts productivity and reduces error rates.

However, this shift also raises concerns. On the positive side, such tools augment human capabilities rather than fully replacing workers, enabling professionals to focus on higher-value tasks. On the negative side, uneven access to AI tools could widen productivity gaps between firms and countries, potentially reinforcing existing inequalities in the global economy.

From a policy standpoint, the challenge lies in managing this transition responsibly. Governments must encourage skill upgradation and digital literacy while ensuring that AI adoption does not lead to excessive job displacement. The rise of departmental AI thus represents both an opportunity for inclusive growth and a test of adaptive governance.

Why has investor confidence shifted towards AI applications?

Investor confidence has shifted because applications provide clearer visibility into revenue streams, customer retention and profitability. Unlike infrastructure-heavy firms, application companies can demonstrate value through subscription models, recurring revenues and measurable productivity gains for clients. This explains the sharp rise in private equity and merger-and-acquisition activity centred on AI applications in 2025.

Another reason is the declining marginal returns from infrastructure investments. As foundational models become widely available, differentiation increasingly depends on how effectively AI is embedded into business processes. Applications break the cycle of circular financing by generating revenue from external customers rather than from within the AI ecosystem itself.

This shift also reflects a broader market correction where fundamentals—cash flow, growth rates and sustainability—matter more than narratives. In this sense, AI is transitioning from a speculative phase to a more disciplined, market-driven phase of development.

How should governments regulate the AI application layer?

Governments face a delicate balancing act. On one hand, excessive regulation at an early stage could stifle innovation in the application layer, which still requires experimentation and failure to discover viable business models. On the other hand, unchecked consolidation—where large firms acquire or neutralise potential rivals—can harm competition and long-term innovation.

Competition policy therefore becomes central. Regulators must closely scrutinise mergers, especially acqui-hires that eliminate emerging competitors without delivering consumer benefits. At the same time, evolving issues such as copyright in training data and privacy in AI agents require adaptive regulatory frameworks rather than rigid rules.

For India and other developing economies, the priority should be to encourage domestic AI applications aligned with national needs while ensuring fair competition. The broader lesson is that AI governance must evolve alongside markets, ensuring innovation thrives without undermining economic fairness or democratic accountability.
