Hot Takes for the AI Investor
Over the last few months, I’ve developed very strong opinions on the future of labor, geopolitics, markets, and AI itself.
And I’m just going to dump it all on you. Four big topics will be addressed today:
The geopolitical maneuver the US is clearly preparing between AI, crypto, and the dollar
How AI could cause the downfall of the “best business model in history”, search
A thorough analysis of AI’s impact on businesses and what the real moat will be
And a simple framework, the knowledge-to-action ratio, to measure AI exposure for every job.
This is not me suggesting stock picks; this is for people who want to understand where AI is going in order to make savvy investment decisions, or for those who simply want key insights you’ll hardly read anywhere else.
Let’s dive in.
This century’s Petrodollar
In 1971, the US abandoned the gold standard (Nixon Shock).
From then on, the dollar was no longer backed by gold, raising concerns about its global reserve currency status. To stabilize demand for the dollar, the US struck a deal with Saudi Arabia in 1974: in exchange for military protection and arms, Saudi Arabia (and eventually other OPEC nations) agreed to price all oil exports exclusively in US dollars.
In simple terms, this meant any country wanting to buy oil had to hold and trade in dollars. As oil was fundamental (and pretty much still is) to most, if not all, nations, it created consistent global demand for the currency, hence the rise of the ‘petrodollar’.
The appetite for US dollars has cemented the US’s dominance in the world, and it still does. Yet oil, despite its continued importance, is seeing a steady decline in interest as the world moves to new technologies.
Thus, the US dollar needs a new peg.
And the answer is AI, but how? Well, using a mixture of two things: crypto and compute. If this sounds too weird, trust me, it will make sense in a minute.
AI Compute, The Ultimate Commodity
In the Trump Administration, David Sacks holds the highest authority over both AI and crypto. I believe the fact that both fall under the same umbrella is no coincidence.
If AI is to be the substitute for the petrodollar, that would mean that most ‘AI units’, people buying AI in some shape or form, are paid for in dollars.
But what does it mean to ‘buy AI’?
For most people, the primary interaction will be with AI products or services, which may originate from various sources and be paid for in multiple currencies. The US can’t stop people from buying an Austrian AI agent for customer support if they are living in, say, France, using euros.
But if there’s one thing in common across all AI products, it is that they need compute to work. And here’s where the US should aim to guarantee its dominance.
The logic is clear. If you own most of the compute on the planet, eventually, anyone who wants to use AI will be increasing the demand for dollars.
While you may pay for the product in euros or rubles, the company behind that product will either be buying compute from a hyperscaler or LLM provider (most, if not all, US companies) or serving the models from its own data centers (highly unlikely, due to the complexity of managing them).
But even in the latter case, the servers, chips, cooling, and so on will mostly come from US companies such as NVIDIA, AMD, and Supermicro, among others. So, at some point, every single AI transaction carries demand for the US dollar.
At least, that’s the plan. But the US seems to be willing to take it a step further.
Stablecoins’ vital role in AI
Another technology the US Government seems to be trying to push the adoption of to secure US dollar demand is stablecoins.
For those not familiar with blockchain, it’s essentially a decentralized ledger. By decentralized, I mean that, unlike a company ledger, where there’s a single source of validation, transactions on a blockchain are validated by a network of distributed validators (‘miners’, in the case of Bitcoin).
The idea is that it’s very hard, or outright impossible, to tamper with a properly decentralized blockchain because you have to ‘fool’ or ‘take over’ a large number of nodes distributed across the world. For instance, the Bitcoin blockchain has yet to be tampered with a single time.
For their efforts, validators are rewarded in the cryptocurrency attached to that blockchain: Bitcoin for the Bitcoin blockchain, or Ether for the Ethereum blockchain.
This is important to note because it means the cryptocurrency has actual value but only if the blockchain is used, which tells you all you need to know about the “value” most cryptocurrencies have. Yes, beyond speculation and arbitrage, basically zero.
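If the tamper-resistance claim sounds abstract, here’s a minimal toy sketch in Python (my own illustration, nowhere near Bitcoin’s actual implementation): each block commits to the hash of the previous one, so rewriting any historical entry breaks every hash that follows. Real networks add distributed consensus on top of this.

```python
# Toy hash-chained ledger: NOT a real blockchain, just the core idea.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block stores the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    # Every block must reference the hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, ["alice -> bob: 5"])
append_block(chain, ["bob -> carol: 2"])
print(is_valid(chain))   # True

chain[0]["transactions"] = ["alice -> mallory: 5000"]  # tamper with history
print(is_valid(chain))   # False: the edit is immediately detectable
```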
As cryptocurrencies are speculation-driven and very volatile, there’s a need for ‘stablecoins’: cryptocurrencies that, instead of fluctuating with supply and demand, are ‘pegged’ (and backed 1:1) to a fiat currency. This allows crypto users to move their gains into a more stable currency without leaving the blockchain (which would be a taxable event).
In countries with a weak currency, stablecoins are massively used as a store of value, as they are a way to have your money ‘soft pegged’ to the underlying fiat currency (usually, the dollar).
So, what does all this have to do with AI?
Simple: it’s clear that US strategists not only want to ensure that, at some point in the ‘purchasing chain’ of AI compute, dollars are being transacted; they even want to “eliminate borders” by allowing the French company we mentioned earlier, buying an Austrian AI product, to pay using stablecoins instead of its native currency.
I know, this particular case, with a strong euro, may be hard to achieve, but besides the Eurozone, and maybe a couple of other areas with strong currencies, the temptation to transact with USD-backed stablecoins will be huge.
This way, the US guarantees not only a presence on the compute level, but also on the product level, making the US dollar, directly or indirectly, the main transaction currency for AI.
The Bullish and the Bearish
The strategy looks pretty good, but there are also reasons to be bearish. On the positive side:
The US has an overwhelming lead in chip design, the dollar is already the world’s reserve currency (a massive head start), and the country concentrates an indecent amount of total AI compute investment: well over $300 billion this year from just four players, projected to exceed $400 billion next year.
The US is also strengthening ties with the Middle East, another region with significant investment potential. The AI Compute Summit, attended by several key AI leaders and Trump, was a huge victory.
On the negative side:
AI models could soon be hardware-specific. With China leading the open-source race (confirmed with GPT-OSS on Wednesday), we could soon see massive Chinese AI adoption unless the US reignites its inclination toward open-source development. GPT-OSS is just one step on the right path, but hardly enough. If models do become hardware-specific, leading Chinese AI models could create appetite for Chinese chips and servers like Huawei’s CloudMatrix. In that case, is the US’s exuberant dominance in compute guaranteed?
Tariffs are less of a financial issue and more of a trust and relationship problem. We are already seeing European countries like mine look further East. I believe Trump underestimates the power of trust, and the fact that the EU chickened out of the negotiations doesn’t mean the EU likes being pushed around.
The Future of “The Best Business in History”
If you were to ask most businesspeople what the best business in history is, most would point to search, aka Google. It concentrates power (huge network effects), it’s almost impossible to compete with (Microsoft, despite being Microsoft, failed miserably), and it’s extremely high-margin.
However, the once golden goose of Big Tech is under severe scrutiny with the advent of Generative AI, as these models let us simply ask what we want to know; the AI searches for us and returns a well-sourced response in a matter of seconds.
A Declining Business Model?
The first issue with AI search is that it may disincentivize its very product: content.
As we saw a while back in this newsletter, several media outlets, some of them commanding huge authority for many years, are seeing a very concerning drop in site visits:

[Chart: declining site visits across long-established media outlets. Source: Wall Street Journal]
Beyond the business case itself, the first question is whether the search ad business will even be viable if content stops being generated. At the current pace, we have two big issues of growing importance:
Content creators are less incentivized to write as revenue drops month by month
The ad model for chatbots is anything but clear: these models are non-deterministic machines (we can’t really predict outputs), meaning there’s no clear way for either advertisers or chatbot companies to determine how “$x will lead to y clicks”.
To me, problem one has no immediate solution, and we are about to see a collapse of human-written content, at least that which is generated as a means to make a living.
So if chatbots run out of content to reference, what’s the point?
Of course, these models capture a decent portion of that knowledge, but would you trust an AI model without citations? Because I wouldn’t.
One possible solution would be for other AIs to take over the content generation duties, but that would imply a huge hit on overall trust, because AIs can say basically anything their users want to. Furthermore, humans despise AI-generated content, and I don’t see how that will change anytime soon.
GPT-5’s content feels, again, as robotic as it gets. I don’t see human emotion in those words, a problem that might never be solved.
But the majority of investors aren’t concerned in the slightest with this; they just assume that a new, AI-led search market will be born. Thus, the question becomes who will win: the current search business, adapted by Google to the new AI paradigm, or challengers like OpenAI and Perplexity.
But I beg to differ; I don’t think that is the correct framing. To me, the question is whether search, as a business, has a future at all, because I’m not sure AI-search is actually monetizable.
Let me explain.
Does it even make sense?
I don’t think investors have really thought about this for long enough, or they assume OpenAI or Google ‘will find a way’. But solving monetization for AI search is actually hard, very hard, because we are dealing with an unpredictable model.
It has:
Unpredictable cost per inference, due to varying input and response token lengths, so you can only approximate an average cost. Traditional search queries, by contrast, have predictable costs.
Inference costs are not only hard to predict at the sequence level, but they are also hard to predict at the GPU level.
Unpredictable search calls. The model queries the search API in natural language, meaning that, although you control the index, you can’t really tell which sources will be surfaced to the model.
Models may hallucinate even when they have sources to ground on, citing those sources wrongly.
Among other stuff. But let’s dive deeper into the cost side to understand the two degrees of freedom in costs: sequence level and GPU level.
The sequence level is straightforward: the longer the input sequence, or the output response, or both, the costlier the inference will be. It’s unpredictable, yes, but at least we know how to attack it: decrease per-token costs.
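To see why averaging hides the problem, here’s a back-of-envelope sketch; the per-token prices and token-length distributions are purely illustrative assumptions, not any provider’s real numbers.

```python
# Simulate per-query cost when input/output lengths are long-tailed.
import random

PRICE_IN = 1.0 / 1_000_000    # $ per input token (hypothetical)
PRICE_OUT = 4.0 / 1_000_000   # $ per output token (hypothetical)

def sample_query_cost() -> float:
    # Lognormal lengths: most queries are short, a few are enormous.
    input_tokens = int(random.lognormvariate(6.0, 1.0))    # median ~400
    output_tokens = int(random.lognormvariate(5.5, 1.2))   # median ~245
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

costs = sorted(sample_query_cost() for _ in range(100_000))
mean = sum(costs) / len(costs)
p50, p99 = costs[50_000], costs[99_000]
print(f"mean ${mean:.6f} | median ${p50:.6f} | p99 ${p99:.6f}")
# The p99 query costs an order of magnitude more than the median, so an
# 'average cost per search' hides enormous variance.
```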
But this is way easier said than done, introducing us to the second degree of freedom, GPU-level unpredictability, which is much, much trickier.
When a model is given a sequence, it performs two phases:
Prefill. The model processes the entire input sequence, applying the attention mechanism (the technique at the heart of most modern models, which makes words in the sequence ‘talk to each other’ so the model grasps what the input says) to build the ‘KV cache’, so that future redundant computations can be avoided. This phase ends when the first token is generated.
This is the reason the first token in the response takes longer than the rest: it requires more compute. For more detail on the KV cache, read {💾 The KV Cache} (Only Full Premium Subscribers)
Decoding. Subsequent tokens follow the same procedure, but the redundant computations are retrieved from memory (the KV cache), making every later prediction faster than the first.
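To make the two phases concrete, here’s a minimal single-head attention sketch (toy shapes, random weights, nothing like a production model) showing how the KV cache turns decoding into an append-and-reuse loop:

```python
# Toy single-head attention with a KV cache (illustrative only).
import numpy as np

d = 16                                   # embedding / head dimension
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # Standard scaled dot-product attention for one query vector.
    scores = q @ K.T / np.sqrt(d)        # (1, t)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                   # (1, d)

# --- Prefill: process the whole prompt at once, build the KV cache ---
prompt = rng.standard_normal((8, d))     # 8 prompt "tokens"
K_cache, V_cache = prompt @ Wk, prompt @ Wv  # big, compute-heavy matmuls
x = attend(prompt[-1:] @ Wq, K_cache, V_cache)  # ends at the first token

# --- Decoding: one token at a time, reusing the cache ---
for _ in range(5):
    K_cache = np.vstack([K_cache, x @ Wk])   # append ONE new key
    V_cache = np.vstack([V_cache, x @ Wv])   # append ONE new value
    x = attend(x @ Wq, K_cache, V_cache)     # tiny compute, big cache reads
```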
In the former phase, GPUs operate in their ideal regime: with minimal data movement, the compute cores are crunching operations at high speed (what GPUs are designed for).
But during the decoding phase, which accounts for the overwhelming majority of time in AI inference, we have a cache to handle, so data is constantly being moved in and out of the compute chips.
Every unit of time a GPU spends moving data rather than crunching numbers, it’s losing money, consuming energy without providing value. This is the arithmetic intensity dilemma we have covered multiple times: you want to maximize the time GPUs spend computing instead of moving data.
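A back-of-envelope calculation makes the gap visible; the hardware figures below are rough, H100-class assumptions, not exact spec numbers.

```python
# Arithmetic intensity: FLOPs performed per byte moved from memory.
PEAK_FLOPS = 1.0e15           # ~1 PFLOP/s dense FP16 (assumed)
PEAK_BW = 3.35e12             # ~3.35 TB/s HBM bandwidth (assumed)
RIDGE = PEAK_FLOPS / PEAK_BW  # FLOPs/byte needed to stay compute-bound

# Decoding one token is a matrix-vector product per weight matrix:
# each FP16 parameter (2 bytes) is read once and used in ~2 FLOPs.
decode_intensity = 2 / 2            # 1 FLOP per byte

# Prefill over a 512-token prompt reuses each weight read 512 times.
prefill_intensity = 512 * 2 / 2     # 512 FLOPs per byte

print(f"ridge point: ~{RIDGE:.0f} FLOPs/byte")
print(f"decode:  {decode_intensity} FLOP/byte   -> memory-bound")
print(f"prefill: {prefill_intensity} FLOPs/byte -> compute-bound")
```

Under these assumptions, decoding sits two orders of magnitude below the ridge point: the GPU spends most of its time waiting on memory, not computing.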
The crux of the issue is that predicting this is alchemy for AI engineers; there are simply too many variables (GPU availability, model size, sequence size, …), and there’s really no good way to put a cost number on it.
So, when it comes to predicting the costs of your inferences, well, you can’t.
Knowing this, it’s becoming much harder to define a price for advertisers. To make the system attractive to buy into, you will have to guarantee inferences per dollar (i.e., for every dollar you pay, I guarantee 10 inferences will mention your product).
But the question is: how many inferences will I (I being Google or OpenAI) need to serve to guarantee those 10 inferences that are actually charged?
You can’t simply bake the product into the model’s response, so the number of inferences I’ll need to serve to monetize that advertiser is dynamic. Needing 20 inferences to guarantee the 10 ‘chargeable’ ones is very different from needing 1,000, as I’d be losing money on 990 of them.
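A hypothetical toy calculation makes the problem plain; the hit rates, serving cost, and ad price below are invented purely for illustration.

```python
# How many inferences must I serve per 10 chargeable ad mentions?
def inferences_needed(chargeable: int, hit_rate: float) -> int:
    # Expected inferences to produce `chargeable` qualifying mentions.
    return round(chargeable / hit_rate)

COST_PER_INFERENCE = 0.01    # assumed average serving cost, $
PRICE_PER_CHARGEABLE = 0.05  # assumed price per guaranteed mention, $

for hit_rate in (0.50, 0.10, 0.01):
    served = inferences_needed(10, hit_rate)
    cost = served * COST_PER_INFERENCE
    revenue = 10 * PRICE_PER_CHARGEABLE
    print(f"hit rate {hit_rate:4.0%}: serve {served:5d} inferences, "
          f"cost ${cost:7.2f} vs revenue ${revenue:.2f}")
# At a 50% hit rate you profit; at 1% you serve 1,000 inferences and
# lose money on 990 of them, the dynamic denominator described above.
```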
So, how on Earth do you make this work?
The most likely route for these companies, for now, is the use of subscriptions, a choice made by Perplexity, OpenAI, and Google.
But as we discussed in our news rundown post for Premium subscribers, subscription-based models make zero sense in AI.
Yes, they ‘work’ for now because it’s very early. Companies can ‘hide’ the tricks they use to keep costs manageable (like downgrading you to a cheaper model when you’re too expensive or demand is too high), but this won’t be tolerable in the hyper-competitive, low-barrier market I predict AI will soon become. Usage-based, or even success-based, pricing will be the norm.
But in that scenario, guaranteeing advertiser attribution while keeping search a profitable business is going to be brutally hard.
Nobody knows the answer to this problem, yet most advertisers are starting to look at AI-led search as the future of advertising, assuming OpenAI, Google, or Perplexity will figure it out, pointing to examples like Perplexity’s revenue-sharing model.
Undeniably, the future points in that direction. But while I could be wrong and there is a way, I can guarantee the days of the high-margin search ad business are gone (as with any high-margin digital business).
This is why I believe Google is “silently” (it was abundantly clear at their last Google I/O) transitioning to an all-things-Gemini business model, building AI-based products and services for consumers at scale. I wouldn’t be surprised if, internally, they are writing off their entire search business (or most of it) to close to zero, and will instead use AI search as a high-volume, low-margin business that pays the bills.
I know this is a very hot take, but that’s the point of this entire post.
Bullish and Bearish
On the good side, Google is sufficiently diversified (not so much at the margin level, which is still search-heavy).
Importantly, it has the deepest and broadest AI capabilities of all companies on this planet, huge compute availability (to drop GPU costs), and can use a—still growing—search business for distribution of the products that will one day substitute search as the main cash cow.
On the negative side, I don’t think Perplexity will survive. They don’t have the compute, nor do they have Google’s talent and decades-old GPU engineering expertise, nor OpenAI’s global awareness.
All this takes us to the next important question: if AI is tearing down the barriers to competition while commoditizing itself at the same time, what is the new moat in an AI future?
And the answer may surprise you.

Subscribe to Full Premium package to read the rest.