4 Must-Knows in AI

STATE OF THE ART
The AI Industry’s State in 4 Parts

Today, we are looking at four overlooked insights that better explain the real state of AI.

Specifically, we will be looking at the following:

  1. AI Adoption, truth or myth? A more detailed look at AI spending, the thermometer of AI hype, and why, while incumbents claim incredible demand, client data tells a different story.

  2. What nobody is saying about China’s accelerator progress. China’s latest chip developments, chips vs. systems, thorium-based molten-salt nuclear reactors, and why the obsession with ‘the chip gap’ proves most analysts are clueless.

  3. The Rebirth of Nuclear. Why a massive electricity event last week will change AI’s energy future.

  4. An honest look at the real progress of the ‘I’ in ‘AI,’ helping us cut through the noise.

Leave if you only care about the headlines. Stay if you want to understand the real state of the industry.

Let’s dive in.

AI Spending & Adoption. Truth or Myth?

This week, the remaining four Big Tech companies held their quarterly earnings calls, during which they presented their financial results for the last three months, and the results were overwhelmingly positive.

But the question here is: what about AI spending?

Hyperscalers Continue to Hyperscale

Let’s look at the five main AI spenders.

  • Amazon (AWS) led the pack, reporting a substantial $25 billion in CapEx for Q4 2024, driven primarily by AI and AWS cloud infrastructure, a striking 67% year-over-year increase.

  • Microsoft (Azure) followed closely, reporting CapEx of $21.4 billion in Q1 2025, representing a 53% year-over-year increase.

  • Alphabet (Google Cloud) also increased CapEx substantially, jumping to $17.2 billion in Q1 2025. Alphabet also maintained its guidance of around $75 billion for 2025, reiterating its willingness to spend that amount.

  • Meta (Facebook) similarly ramped up its investment, reaching a record $13.69 billion in CapEx during Q1 2025. It also maintained its full-year guidance of $72 billion.

  • Oracle, while not spending nearly as much as its Hyperscaler friends ($2.6 billion), had the most ambitious guidance in last month’s earnings report, claiming it intends to spend $16 billion on AI CapEx, double what it spent in 2024.

Now, not all of that spending is AI-focused. Still, non-AI investments are almost a rounding error, even though many of these companies try to prevent jumpscares by claiming that much of this CapEx is oriented toward supporting their cash-cow businesses.

But how have these companies justified such huge spending figures?

The common pattern has been, in the words of Oracle CEO Safra Catz, that demand is “dramatically outstripping supply.”

Simply put, they are investing in compute because they cannot meet current demand levels for their AI products and services. If we take these claims at face value, AI products and services should be experiencing a Cambrian adoption explosion.

But is that the case? Well…

At first glance, there are several reasons to believe this claim:

  1. Amazon powers Anthropic’s compute demand for its AI models. It’s also integrating Claude into Alexa, its smart-home assistant, now rebranded as Alexa+, meaning that Alexa users are quite possibly inundating Claude’s models with requests in their daily interactions with the smart-home system. Additionally, according to both Cursor and OpenRouter rankings, Claude models are consistently among the most-used LLMs, especially for coding.

  2. Microsoft powers OpenAI’s demand (although not fully anymore, as another player is doing that, too), and OpenAI alone has 800 million monthly active users as of last month, so that’s already a massive demand source. Also, Microsoft is deploying its AI models across all of its enterprise and consumer digital products, mainly through the Copilot brand, which, whether you like it or not, is everywhere on your screen if you use Microsoft Office.

  3. Google has successfully rolled out Gemini to hundreds of millions of users of its search engine (it reportedly has 350 million users per month), and it’s also seeing increased usage of its models for coding via AI-integrated development environments (code-editing software) like Cursor or Windsurf. Also, like Microsoft, Google is putting Gemini everywhere in its enterprise software suite, whether you want it or not.

  4. Meta has successfully rolled out Llama models across several social media apps, mainly WhatsApp and Instagram, offering features like audio transcription. Additionally, it has released the Meta AI app, although we’ll have to wait and see what impact it has.

  5. Oracle is now a fully-fledged OpenAI compute provider. The biggest outcome of their partnership, through the Stargate Project, is the gigawatt-scale Abilene (Texas) data center, which is well under construction (200 MW will be online soon), so they are seeing (and will continue to see) growing demand from OpenAI over the coming quarters.

Overall, things look great. But if we push our analysis further, cracks appear.

The Story We Are Being Told Is Only Half True

I wouldn’t go as far as to say they are being dishonest about the real reasons behind their massive AI investments, but there are signs that they aren’t giving us the entire picture.

On the good side, we actually have some numbers to go on. Microsoft mentioned in the past quarter that its revenues from AI products and services reached a $13 billion annual run rate.

Other companies, like Google, mentioned “billions in revenue” from AI in the last quarter alone but did not disclose a specific number. Moreover, Meta was adamant about the crucial role AI is playing in helping its core business make money by increasing time spent on its apps (pretty much like TikTok uses AI to guarantee you end up doomscrolling).

But the overall impression they give me is that they aren’t being fully transparent.

For instance, there are reasons to believe that Microsoft is actually seeing weaker-than-expected demand for its Copilot business, something Amazon (with Claude) could also be guilty of. There are two reasons: user interest trends and, more crucially, Microsoft’s own actions.

Despite the exciting numbers that were shared, the overall interest in Microsoft’s flagship AI product, Copilot, is very poor compared to names like ChatGPT or Gemini.

Additionally, Microsoft has acknowledged overcommitting to AI spending. Despite its steadfast guidance of $80 billion in AI spending this year, it has canceled around 2 gigawatts’ worth of LOIs (letters of intent) with data center providers over lease contracts for the upcoming years.

In layman’s terms, it had reached lease agreements with providers for future years that it has now decided to back out of.

This is quite telling of underwhelming demand. Moreover, Amy Hood, Microsoft’s CFO, said this week that they would rebalance the spending mix:

“Spending will grow at a lower rate than FY 2025 and will include a greater mix of short-lived assets, which are more directly correlated to revenue than long-lived assets.”

Amy Hood

But what does this tell us?

If you really expect “infinite” demand, you are going to spend most of your money on securing as much land as you can; the more land, the more data centers can be built.

When demand expectations are that high, making those investments profitable (actually building the data centers) takes a backseat because you are more concerned about not running out of space in the future.

Conversely, if you think demand is going to be less than expected, you are going to prioritize deploying the data centers.

And what they are doing is precisely the latter. Instead of committing more money to securing more land, they will spend more money on the actual hardware that makes money, the GPUs, moving forward.

In other words, that means they feel they have enough space to meet current and future demand.

If we recall last week’s analysis on GPUnomics, GPUs are revenue-generating assets because they simply process and generate tokens, which are then sold to customers at a margin.
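
To make that point concrete, here is a rough back-of-the-envelope sketch of per-GPU token economics. Every number in it (GPU cost, throughput, utilization, price and operating cost per million tokens) is an assumption chosen for illustration only, not a figure disclosed in any of these earnings calls:

```python
# Back-of-the-envelope GPU token economics (all numbers are illustrative assumptions).

GPU_COST_USD = 30_000          # assumed all-in cost of one accelerator
TOKENS_PER_SECOND = 1_500      # assumed sustained generation throughput
UTILIZATION = 0.40             # assumed fraction of time serving paid traffic
PRICE_PER_M_TOKENS = 2.00      # assumed revenue per million output tokens (USD)
OPEX_PER_M_TOKENS = 0.50       # assumed power/cooling/hosting cost per million tokens (USD)

SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_year = TOKENS_PER_SECOND * UTILIZATION * SECONDS_PER_YEAR
revenue_per_year = tokens_per_year / 1e6 * PRICE_PER_M_TOKENS
opex_per_year = tokens_per_year / 1e6 * OPEX_PER_M_TOKENS
gross_margin_per_year = revenue_per_year - opex_per_year

payback_years = GPU_COST_USD / gross_margin_per_year
print(f"Tokens served per year: {tokens_per_year:,.0f}")
print(f"Gross margin per GPU per year: ${gross_margin_per_year:,.0f}")
print(f"Payback period: {payback_years:.1f} years")
```

The only takeaway of the sketch is that an installed GPU starts paying for itself the moment it serves paid tokens, whereas land and unbuilt shells sit on the balance sheet generating nothing, which is why a shift toward short-lived assets is a shift toward revenue.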

This paints an underwhelming picture for Microsoft. Worse, out of the recognized revenue, it’s unclear how much is simply the 20% cut of OpenAI’s top line that we know is part of the Microsoft/OpenAI deal.

Although accounting practices allow Microsoft to recognize its cut of OpenAI’s revenues as operating revenue, to me, it doesn’t count, because it hides the poor traction of Microsoft’s own products.

But this is just Microsoft, and the other Big Tech companies could be having booming AI businesses, right?

Demand And Revenues Don’t Have to Be the Same

Other companies like Google or Meta seem to be firing on all cylinders. Although neither of them puts a revenue number on their AI products, their advantage is that integrating AI into their core businesses is much more straightforward.

Specifically, they can both cite (and are citing) AI as a key driver of higher usage of their core businesses (Internet search and social media apps, respectively).

And based on their earnings results, it’s hard to dispute that.

Or is it? 

We are seeing claims of hundreds of millions of active users, billions of queries per day, and so forth. All of that obviously translates to ‘huge demand,’ but how much of that demand is making money, or is even remotely monetizable?

If we look at the demand-driving products these companies have, well, they are mostly free products.

  • We know ChatGPT has less than a 5% conversion rate (with some estimates putting the value at 0.96%, although I believe that’s wrong), meaning that at least 760 million of its monthly active users are free users, aka zero-revenue users (the quick math is sketched after this list).

  • Google’s explosive user growth is based on AI Overviews, its Gemini-powered summaries on top of every search. I use AI Overviews heavily every day but have yet to pay a single dollar for the service.

  • Alexa+ drives Amazon’s growing Claude usage. Yet customers have not seen a price increase because of the integration; Amazon is eating the cost of running the models.

  • Both Microsoft’s and Google’s increased usage of AI services through their enterprise suites was not an opt-in feature; they have pushed AI down our throats through their bundled offerings. Users may know they are using AI, but they have no choice.

  • Google and Meta’s growing usage via their core products also means they are pushing AI down our throats either way. Worse, many people aren’t even aware they are using AI.
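
As a quick sanity check on the first bullet, here is the free-user arithmetic. The 800 million monthly-active-user figure and the two conversion rates are the ones quoted above; everything else is just multiplication:

```python
# Free-user arithmetic for ChatGPT (figures taken from the text above).

monthly_active_users = 800_000_000   # OpenAI's reported MAU
conversion_estimates = {
    "upper bound (<5%)": 0.05,
    "low estimate (0.96%)": 0.0096,
}

for label, rate in conversion_estimates.items():
    paying = monthly_active_users * rate
    free = monthly_active_users - paying
    print(f"{label}: ~{paying / 1e6:.0f}M paying, ~{free / 1e6:.0f}M zero-revenue users")

# At 5% conversion, ~40M users pay and ~760M generate no direct revenue;
# at 0.96%, the free pool grows to roughly 792M.
```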

In other words, it seems that all these companies are artificially inflating their AI businesses by forcing clients to pay for them or embedding them into their core products and hoping investors buy the idea that AI leads to more usage.

You may accuse me of being overly cynical, but let me give you an example. Since AI Overviews became available in my Chrome browser, the number of clicks I make on any particular website has dropped dramatically.

And you know how this business works, right? Companies pay to appear at the top of the search results so that people click through to their websites. But if AI Overviews provide the answer, what’s the point?

Naturally, you can argue that AI Overviews incentivize me to use Chrome over other browsers, but it’s nonetheless impacting whether I see what advertisers want me to see.

For all those reasons, it’s still very early to claim victory, as I believe that

1. most demand is either self-generated, imposed, or, worse, not monetizable, and

2. it’s still pretty unclear whether AI is actually good for their core businesses... or bad.

Either way, as investors latch onto any sign of AI-driven revenues, it’s unsurprising that Meta and Google are reaping the major benefits in their stocks; the narrative that these non-monetizable investments are being indirectly monetized by strengthening the core products is better than nothing.

However, perhaps more concerning, most of this usage is consumer-based, not enterprise-based. Enterprise usage remains the big unanswered question.

While, on paper, we are seeing record-setting adoption rates by enterprises, with most companies using AI in some shape or form, the reality is much grimmer:

  1. The numbers aren’t adding up. As reported by the Wall Street Journal, companies are struggling to earn a return on their AI investments. Despite record-setting investment (almost $40 billion), only 1% acknowledge scaling their investment or being mature, and 43% are still in the pilot phase, with most C-suite executives saying adoption is being executed ‘very slowly’.

  2. There is tremendous noise around AI in general, with conflicting reporting. While Capgemini reports overwhelming support for AI-assisted shopping, KPMG hilariously looked at the exact same question… and found the EXACT opposite.

  3. Boards are not prepared. A report by Deloitte showed that most boards are not ready for the change: “Over three-quarters of respondents (79%) say their boards have limited, minimal, or no knowledge or experience with AI.” In other words, board members are largely clueless about AI. Not the best of signs.

All in all, in my honest view, there are several reasons to be skeptical about the true nature of AI adoption:

  • Hallucinations are getting worse with reasoning models, making these products less attractive to enterprises. I believe most AI growth in enterprises will continue to come from supervised learning, aka traditional machine learning, which does not require accelerators (GPUs) or foundation models, so there’s zero revenue upside for these companies there.

  • Data security continues to be a crucial requirement. Many companies, like Palo Alto Networks, have turned to open-source models like DeepSeek that they can run themselves. That’s still revenue for compute providers, but at lower margins.

To summarize, investors seem to buy the adoption narrative for now, but the reality seems far less extraordinary than what these CEOs and CFOs are telling us.

But these half-truths are nowhere near as bad as what you’re being told about China. Let’s fix that now.
