State of AI Report: A Critical Look Beyond the Hype
Some business thoughts about ICONIQ's 2025 Builder's Handbook.
ICONIQ Capital issued a "State of AI Report" that offers some genuinely useful insights into LLM adoption patterns and future trends. At the same time, its framing reveals more about venture capital priorities than market reality.
While there's solid data here worth examining, the report's central narrative deserves scrutiny from anyone making real AI investment decisions. For example, the above chart illustrates that tech companies are building agentic workflows first while new AI model development seems to have cooled off a bit. This is true whether a survey respondent was from an “AI-native” company or an “AI-enabled” one.
As a reference point, ICONIQ defines AI-native as a company whose "entire value proposition is architected around generative intelligence", e.g., AI startups. An AI-enabled company is retrofitting AI into existing business models, either by integrating it into existing products or by introducing new ones. Basically, AI-enabled represents everyone who is not an AI startup, at least in my mind.
The agent development focus may be real, but that doesn’t mean actual buyers of AI products are procuring agents, or that agents have achieved a level of reliability good enough for widespread adoption yet. As Cobus Greyling noted in his most recent article, we’re not seeing agent performance levels above 60%.
Source: Cobus Greyling
So while agent performance is improving, AI-native startups are building tools for the future that still need work. I suspect they will be the first to rush these products out, but that doesn’t mean mature “enabled” companies are lagging. They have existing businesses to defend, and introducing an unstable product is one way to lose that customer base.
Let’s look a little further into ICONIQ’s report…
Developer Challenges: The Technical Reality Check
Beyond the positioning, the report reveals important truths about what developers actually face when implementing AI systems. The top deployment challenges highlight persistent technical limitations:
Hallucinations (39%)
Explainability and trust (38%)
Proving ROI (34%)
Reliability and business fit remain major challenges for LLM-based AI implementations. It’s hard to justify acquiring software you can’t trust and can’t show clear value from. The report cited Salesforce’s own research showing that Agentforce achieves only 58% success rates on single-turn tasks, dropping to 35% for multi-turn interactions, as we have noted here in the past as well.
These aren't minor hurdles; they represent fundamental reliability issues that limit the applicability of AI in mission-critical scenarios. When enterprise AI tools perform inconsistently, companies face a choice: accept the risk of unreliable automation or invest heavily in human oversight that reduces the promised efficiency gains.
The Multi-Model Shift: OpenAI's Declining Dominance
The study shows AI companies are using an average of 2.8 different AI models in their software, suggesting the market is moving beyond single-vendor solutions. This diversification makes strategic sense: each LLM has its own strengths and weaknesses, different models excel at different tasks, and spreading workloads across vendors reduces dependency risk.
The data shows meaningful competition emerging:
OpenAI maintains leadership but not dominance
Anthropic, Google, and Meta are gaining ground
Anthropic is now used in 50% of all full-stack, vertical, and horizontal apps, showing its progress as a strong alternative to OpenAI.
Cost-effective options like DeepSeek and Mistral are gaining market share, challenging the cost structures of more expensive AI providers
41% of companies are exploring open-source models primarily for cost control
This trend toward multi-model architectures suggests AI is becoming commoditized infrastructure rather than a sustainable competitive differentiator. It points to a more mature market with optionality, in both capability and cost, which should translate into a more realistic adoption cycle. It also conflicts with the "AI-native advantage" narrative: mature companies are dictating market needs and seeking alternatives to the early market leader.
The Economics Nobody Wants to Address
One aspect of the AI bubble that is becoming a dominant thread is return on investment (ROI). That concern spans startup investors, public buyers of AI-centric stocks, enterprise buyers of AI software, and consumer companies. However, the high cost of AI is drawing objections, and so far without much downward pressure on pricing.
Even as it celebrates productivity gains, the report reveals some concerning cost dynamics. Key findings:
70% of companies identify API usage fees as their hardest infrastructure cost to control
High-growth companies spend twice as much on inference costs as peers
Monthly AI infrastructure costs can reach $2.3M at scale
Internal AI productivity budgets are doubling in 2025
These numbers represent real economic challenges. When AI operational costs exceed the promised value of cost savings and optimization, the business case becomes questionable. The end result will force AI companies to lower their API call costs, licensing fees, and SaaS prices. But it might not end there. Better product design, marketing, and sharper value propositions that align with actual outcomes will also be necessary.
Internal Productivity: Real Gains with Real Requirements
That said, the gains are real. Productivity data shows that companies operationalizing AI report 15-30% efficiency improvements across various use cases, with coding assistance showing the strongest impact (65% of respondents rank it as their top productivity driver).
However, these gains aren't automatic. The most successful implementations require:
Process redesign rather than simple tool adoption
Training humans to collaborate effectively with AI systems
Accepting iteration cycles as AI capabilities mature
Building organizational curiosity rather than expecting plug-and-play solutions
Again, AI does not work out of the box, and that puts the onus on executives in corporate America to not only procure AI but also further develop and tailor solutions to fit their culture. Verticalization and domain-specific AI solutions just aren’t there yet.
What Business Leaders Should Actually Focus On
So what’s a business executive to do? The good news is that, stripped of venture capital optimism, the data suggests several practical priorities for managing AI procurement, implementation, and adaptation in the enterprise. The report offers several clues that can help steer clear of the pitfalls.
Cost Management Is Critical: Early AI adopters focused on capability demonstrations. Successful scaling requires disciplined cost control and clear ROI measurement. It also requires piloting and letting go of implementations that don’t measure up to internal success expectations.
Platform Integration Wins: Despite the "AI-native" narrative that dominates the report (more on that below), 68% of companies rely on existing vendors for their AI capabilities. This approach minimizes implementation risk while leveraging established workflows. It also enables brands to recruit talent to help tailor AI models to their specific needs.
Curiosity Drives Success: Organizations seeing meaningful results aren't just buying AI tools; they're systematically exploring specific problems, testing boundaries, and iterating when initial attempts fall short. We’ve written about this recently here on CognitivePath; check out that article here.
Hybrid Approaches Work: The most practical AI deployments enhance human capabilities rather than attempting full automation. This approach delivers measurable benefits while managing the technology's current limitations. In the end, though tech companies like to showcase their AI layoffs to their Wall Street investors, humans ultimately run AI systems. Training people to do so makes sense.
The "AI-Native vs AI-Enabled" Framing Problem
As noted in the introduction, the report positions "AI-native" companies (32% of respondents) as fundamentally superior to "AI-enabled" companies (69% of respondents). This framing works to ICONIQ’s advantage, given its portfolio of AI-first companies, primarily venture capital-backed startups. According to ICONIQ, AI-native companies reach scale faster (47% vs 13%) because they're "structurally better equipped" for AI transformation.
This distinction is problematic for several reasons.
For example, by this definition, classifying Salesforce (which processes over a trillion AI predictions daily and generates billions in AI-related revenue) as merely "AI-enabled" while treating newer startups as "AI-native" champions misses the forest for the trees. We’re talking about the developer of Agentforce, the brand that literally ushered in the agentic AI boom. Or what about Google and Gemini? Or Microsoft, with its OpenAI investments and Copilot (not to mention its agentic frameworks)? Then there is Meta and Llama.
What matters isn't your founding date relative to the AI boom or how much of your product set is pure AI. Rather, we should be discussing how effectively you integrate AI into your value proposition.
The framework effectively presents ICONIQ's portfolio of newer companies as more promising, while positioning established players as legacy operations. Frankly, many AI-native companies lack a clear market value proposition, which is best indicated by their current size and lack of market position.
Market reality suggests that companies with existing customer relationships, proven business models, and domain expertise often have significant advantages in AI deployment, even if they didn't start with AI as their core technology. Big tech has the upper hand just based on sheer resources. It’s easier to join the party late, either through acquisition or development, and maintain market dominance than it is to displace leading companies.
The Bigger Picture
ICONIQ's report contains valuable data about AI adoption patterns, technical challenges, and economic realities. There is quite a bit of useful information on resourcing AI and utilizing chief AI officers to lead AI development.
The companies achieving real AI value aren't necessarily those that started with AI—they're the ones approaching it with realistic expectations, systematic experimentation, and patience for the iterative process that successful technology adoption requires. The AI transformation is real and accelerating. However, a sustainable competitive advantage will come from thoughtful implementation and organizational learning, not from rushing to acquire technology for its own sake.
However, the report’s central thesis about AI-native superiority reads more like investment thesis validation and valuation hot air than objective market analysis. To be honest, I was surprised that the report was as candid as it was about some AI problems. Perhaps it is a sign that Silicon Valley is getting down to brass tacks and addressing at least some of the AI boom’s more troubling hype-based problems.
What do you think of the state of AI development?