THEWHITEBOX
TLDR;

Welcome back! This week, I have a lot of news for you. Several new products (awesome ones and also, ironically, bad ones), new debt stories to keep you awake at night, market data, and others that will make you think deeply about the world we are moving towards.

Enjoy!

ASSISTANTS
ChatGPT and Claude get Interactive superpowers

OpenAI announced it a few days ago, and Anthropic has now followed: both have added in-chat visual generation. They can now automatically create charts, diagrams, tables, and other interactive visuals within a conversation when they think it would help, or when you explicitly ask for them.

In Claude’s case, these visuals appear inline in the chat, unlike Claude’s older “artifacts” feature, which opens in a side panel (which seems to be ChatGPT’s case still, as seen in the thumbnail). The new visuals are more temporary and can change or disappear as the conversation evolves. The feature is rolling out to all users and is enabled by default.

TheWhiteBox’s takeaway:

Both apps are slowly becoming defaults for everything. They can literally help you with almost anything you want to do on your computer that involves text, code, and, soon enough, creative work, as Google is massively pushing for.

Eventually, the entire software stack will be built on top of these models, an idea I have called the ‘LLM Operating System’ in the past. It was more than a year ago, but it feels closer and “truer” than ever.

MODELS
Google Releases Embeddings 2

Google has released a new model, Gemini Embedding 2. The main idea is that it places text, images, video, audio, and documents into a single shared embedding space, rather than handling only text embeddings. It supports more than 100 languages.

According to the piece, Gemini Embedding 2 can process up to 8,192 text tokens, up to six images, videos up to 120 seconds, native audio, and PDFs up to six pages.

TheWhiteBox’s takeaway:

But what does that mean? Picture a model that can semantically unify meaning across several data modalities. A text describing a penguin, an image of a penguin, and a video of a penguin will all be mapped to nearby points in the same embedding space.

And why would you need that? Simple example: search. Imagine a product that can search images based on text descriptions. Sounds a lot like what Google Image Search does, right? Well, surprise, that’s exactly what is going on under the hood.

But this model takes it a step further by incorporating other modalities, meaning you can search for video with video, images with text, audio with video, and video with text… You get the gist.
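To make this concrete, here is a minimal sketch of how cross-modal search reduces to nearest-neighbor lookup in one shared space. The vectors are hand-made stand-ins; in a real system each would come from the model (e.g., a hypothetical `embed_text()` / `embed_image()` API call), but the ranking mechanics are the same.

```python
from math import sqrt

# Cosine similarity: how close two embeddings point in the shared space.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Pretend catalog: the penguin media cluster together; the car image sits elsewhere.
catalog = {
    "penguin_image": [0.98, 0.17, 0.05],
    "penguin_video": [0.95, 0.28, 0.08],
    "car_image":     [0.05, 0.10, 0.99],
}

# A text query embedded into the SAME space, e.g. embed_text("a penguin on ice")
query = [0.97, 0.20, 0.06]

# Cross-modal search is just ranking by similarity, regardless of modality.
ranked = sorted(catalog, key=lambda k: cosine_similarity(query, catalog[k]), reverse=True)
# ranked -> ["penguin_image", "penguin_video", "car_image"]
```

Swap the query for an image or video embedding and nothing else changes, which is exactly why one shared space is so powerful for search.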

WAR
AI, Under the Spotlight for Targeted Attacks

There’s mounting concern that AI’s role in warfare is much more than simply making plans.

The issue is that it seems AIs may have played a role in particular attacks, especially for targeting objectives. I won’t indulge in speculation about whether Claude or any AI participated in any particular attack, because I don’t know, but the idea that AIs could be used for such tasks is indeed concerning.

As you can probably guess, I’m not particularly fond of Dario Amodei, but if there’s one thing he was right to point out, it's that AIs are not ready to make autonomous decisions about which target to hit. They may suggest options, but under no circumstances should they decide.

The problem with these AIs is that they are probabilistic; by nature, they are somewhat random. I’ve talked in the past about this randomness not only being about the software adding randomness (which is done but can be removed), but also about the distributed nature of AI workloads in GPU clusters, which introduces other factors that make the final output slightly different every time.

Without that proper predictability, there’s always a non-zero chance the model makes an unpredictable mistake. Furthermore, the model can make an “honest” mistake too, misidentifying the target.

But wait, can’t humans make mistakes too when doing this kind of targeting? Sure thing, but here’s the difference: humans can be held accountable for their mistakes; AIs cannot be held accountable. We’ve known this for decades; if not, ask IBM engineers, the godfathers of much of the computing we have today, what they had to say back in 1979.

If that were the case, and the AI mistakenly attacked the school, which we don’t know and probably never will, that would be extremely reckless. What the IBM engineers asked for in 1979 should be enforced, because no accountability guarantees recklessness.

DEBT
Salesforce, what is you doin’?

Amidst a sea of oversubscribed corporate bond issuances from firms like Amazon and Oracle, both of which have lately raised debt with passionate support from bond investors, one company, Salesforce, hasn’t been as successful, and for very good reason.

I have recently shared my deep concerns about Oracle’s position here.

As you may have noticed if you’re a regular of this newsletter, SaaS companies are having a—predictable—bad time in markets. Suddenly, investors believe their businesses are going to get killed by AI.

This is something of an exaggeration (though directionally right, in that they are going to be deeply transformed), but Salesforce's weird decision to issue $25 billion in corporate bonds to… buy back shares surely doesn’t help its case.

It’s not like Salesforce is going to use that money to turn the boat around. They want that money to buy back shares, reduce the outstanding share count, and thus artificially increase the price.

TheWhiteBox’s takeaway:

I can buy the idea that SaaS companies are considerably oversold relative to current performance; believing Salesforce is a company that can survive this ongoing crisis is another matter. It doesn’t feel like one.

If their solution is to take on debt to pump the stock price, they clearly aren’t overflowing with ideas for turning the boat around. It feels quite desperate, honestly.

I’ll probably speak about this in more detail in my portfolio section sometime in the near future, but this is how I personally see the SaaS market:

  1. I strongly believe the future of software is low margin. In my view, only SaaS companies that own the AI rather than outsourcing it to OpenAI et al. have a meaningful chance of higher margins. Additionally, most current SaaS companies don’t have “what it takes” to become low margin. Too bloated.

  2. I believe "Build vs buy" risk is very real, especially for new companies that aren't yet in the Salesforce/SAP ecosystems. Many customers will cannibalize your business from below. Not to compete with you, but to substitute you.

  3. The elephant in the room: migrations. Even if you can replicate the 20% of Salesforce functionality you need, migrating out of Salesforce is such a burdensome problem that it’s not even an option for most today. But migration tools could change that. IBM dropped 10% on the news that Claude could code COBOL (textbook example). The idea is simple: if AI tools can facilitate migration, it's game over for bloated bundle SaaS business models.

  4. I rarely see new customer-led growth mentioned on SaaS earnings calls (I haven't seen Salesforce mention it at all). Key metric, in my view: is revenue growth just from trapped-customer price hikes, or from newly acquired customers? I believe the former.

  5. Per-seat licensing is dead. Over time, customers will ask for fewer seats, so SaaS companies will eventually have to move to usage-based pricing. Watching which SaaS companies are capable of executing that transition is key.

  6. Human-facing vs agent-facing. Most software will be used by agents eventually, so tracking whether SaaS companies are leading the charge in this transition is also important (e.g., releasing agent CLIs, APIs, skills, MCPs...)

  7. The market transition from software to hardware feels inevitable. Building software is much, much easier now. On the other hand, try building ultra-high-power lasers and co-packaged optical MEMS switches as Lumentum does, or 20-high HBM4 stacks with a 2,048-bit interface and high yields as Hynix does. While AI democratizes software, the AI hardware supply chain could hardly be more specialized and bottlenecked than it is right now.

AI STRUGGLES
Amazon’s ‘AI-generated’ outages

You’d better get accustomed to this. According to the Financial Times, Amazon is holding a large engineering review after a series of outages linked to AI-assisted coding changes across its retail and cloud systems.

One reported incident was a six-hour crash of Amazon’s website and app caused by a faulty deployment, and another was a 13-hour outage of the AWS cost calculator tied to changes made by Amazon’s Kiro AI tool.

Amazon says it is treating this as part of its normal reliability process, but internally it has described a recent pattern of incidents with unusually wide impact. The company is responding with tighter controls, including more senior review of AI-assisted code changes.

TheWhiteBox’s takeaway:

That coding agents are amazing, and that reckless use can hurt more than it helps, are two thoughts that can coexist. AIs can write great code, but using them in software-heavy environments requires a combination of two things:

  1. The human in the loop has to know what they are doing

  2. Code reviews should not go away.

I’ve read a lot of “experts” in Silicon Valley arguing we shouldn’t be reviewing code anymore; that there’s too much code to review. Well, maybe you should write less code and verify that it works before deploying!

We are going to see a ton of news on this topic, and AI will take the blame with the typical “nah, it writes bad code.” But to me, this is entirely the Amazon dev teams’ fault, and it goes back to our previous point on accountability: AI empowers, but those it empowers must still be held accountable if things turn south.

DEBT
SoftBank, Another Debt Problem

If you thought the AI debt bubble was an ‘it’s just Oracle, guys’ story (by the way, markets don’t seem to care as much after the latter rocked earnings), you’re wrong.

Financed AI spending is everywhere, which puts Monday’s Leader post comments on how the war on Iran can impact AI into perspective: if it prevents interest rate declines (or worse, causes hikes), it will make debt much more expensive.

SoftBank is one such heavily indebted case. With its bonds already well into junk territory, this comes at the worst possible time, as it is one of OpenAI’s key lenders.

As we mentioned last week when we covered Oracle’s debt problems, bonds trading below par value (below their original price, suggesting people are selling them at a “loss” to recoup liquidity) isn’t necessarily a sign of default risk; it may simply reflect an opportunity cost, meaning those bonds are simply less desirable at that point in time.

Instead, what you should be looking at is credit default swaps (CDSs), which are basically insurance on a bond: if you buy a CDS and the bond defaults, the seller covers your loss.

Therefore, if the price of that protection increases, it means sellers are charging more to take on the risk.
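A back-of-envelope sketch of why CDS prices are a cleaner risk signal than bond prices. This is the classic approximation (spread ≈ default probability × loss given default), not how trading desks actually price CDS, and the 150 bps figure is made up for illustration:

```python
# Rule-of-thumb conversion from a CDS spread to an implied default probability.
# Assumes the conventional 40% recovery rate; all numbers are illustrative.
def implied_default_prob(spread_bps: float, recovery_rate: float = 0.40) -> float:
    spread = spread_bps / 10_000            # basis points -> decimal
    return spread / (1.0 - recovery_rate)   # approx. annual default probability

p = implied_default_prob(150)               # hypothetical 150 bps of protection
# p -> 0.025, i.e. markets implying roughly a 2.5% annual default probability
```

The takeaway: a widening spread maps almost directly onto a higher market-implied chance of default, which is why watching CDSs tells you more than watching bonds trade below par.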

And as the thumbnail shows, SoftBank’s CDS spreads are growing like bread in an oven, almost perfectly negatively correlated with SoftBank’s stock price, reflecting growing concerns about the future of one of AI’s biggest financiers.

This isn’t remotely the only case where we are seeing this. Another big “AI funder”, private credit, is struggling too, with perhaps no better example than Blue Owl Capital, down 40% on the year.

TheWhiteBox’s takeaway:

This is all warranted. The return on AI CapEx is anything but guaranteed. As Morgan Stanley predicted, for the AI investors financing the party to make money, AI has to generate trillions of dollars in revenue, several dozen times what is being generated today.

Masayoshi Son, SoftBank’s CEO, is a diehard believer, to the point that SoftBank is raising another $40 billion in debt to pay OpenAI, an unprofitable company that won’t be making money before 2030.

AI investing has become a matter of faith, and Masayoshi seems to have a lot of it.

CONSUMER
The ChatGPT App is Killing it

A partner at Altimeter, a venture capital firm that invested in OpenAI, has shared some interesting insights on the state of OpenAI’s consumer business.

The key finding is that ChatGPT does not just have the most users; it also appears to have the most habitual users. The reported gap with rivals widens on engagement metrics, suggesting that competitors may attract attention but not the same depth of recurring use.

The percentage of monthly active users who are also daily active users is a key metric for measuring engagement.
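For clarity, this is the "stickiness" ratio in question, sketched with made-up numbers (not the report's actual figures):

```python
# DAU/MAU stickiness: what share of a product's monthly actives show up on a
# given day. Higher values indicate habitual, daily use.
def stickiness(dau: int, mau: int) -> float:
    return dau / mau

habitual = stickiness(dau=25_000_000, mau=100_000_000)   # 0.25: daily habit
drive_by = stickiness(dau=5_000_000, mau=100_000_000)    # 0.05: occasional use
```

Two apps can report the same monthly user count while one has five times the engagement depth, which is exactly the gap the report argues ChatGPT has over rivals.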

Another key element to point out is retention. Not only does ChatGPT report higher user retention than other AI apps, but its retention has also improved even as it has scaled, while the others are seeing declining trends.

That is unusual for a fast-growing consumer product and suggests the app is becoming more useful over time, not less.

TheWhiteBox’s takeaway:

Which subscription should I really use, then?

I haven’t hidden my preference for ChatGPT. I kid you not, I’ve tested all there is to test, and for my particular task distribution (which doesn’t necessarily mean it's the best option for you), ChatGPT consistently met my expectations above all others.

Having the Codex app now too, which is my go-to for building software products, coupled with the fact that OpenAI has, by far, the best rate limits for its Pro subscription (I’ve yet to be rate-limited even once), has locked me in for good.

Also, Codex allows you to connect agents to your subscription (meaning you don’t have to pay extra API costs to use Codex models as agents), making my $200/month subscription feel more than worth it.

I’ve been very tempted to add the Gemini Ultra subscription because of the sheer number of great products Google offers in that bundle, but I’ve found that my use of Gemini products (mostly Nano Banana and NotebookLM) is more than covered by the Gemini Pro subscription.

As for Claude, the products are great (more on their latest release below), but their rate limits are far from great. For what it’s worth, Anthropic is seriously ramping up compute in 2026 thanks to the Google TPU deal, so if you favor Claude, you should see better service over the year.

But for me, there’s no debate whatsoever: ChatGPT is the best bang for your buck if you’re using these AIs to think or for knowledge work (its models are also by far the smartest when pushed to the max on compute).

  • If you’re more on the creative side, Gemini is probably your best option, considering they have Nano Banana and a plethora of creative apps like Pomelli.

  • If you're more on the “I just need a good agent with good compatibility with my software work ecosystem” side, then I would give Claude a try, although OpenAI is seriously investing here too with its ChatGPT for Excel product, which I’ve covered already and is great.

CONSUMER
Which are the top GenAI Consumer Apps?

a16z’s latest ranking suggests consumer AI is broadening beyond standalone chatbots into mainstream software with AI built in. ChatGPT still leads comfortably, but the market is becoming more multi-product and more competitive.

The report’s main finding is that ChatGPT remains the clear number one on web and mobile, while Gemini and Claude are gaining traction, especially with paid users. That points to a market where one product still dominates, but rivals are becoming more credible.

It also finds that AI use is geographically fragmented, with Western, Chinese, and Russian products forming increasingly distinct ecosystems. On a per-capita basis, the strongest adoption is in places like Singapore, the UAE, Hong Kong, and South Korea rather than the US.

A third takeaway is that creative AI is shifting. Standalone image generators are losing ground as those features get absorbed into larger platforms, while video, music, voice, and agent-style products are gaining importance.

Overall, the report suggests consumer AI is moving from a first phase centered on chatbots to a broader platform phase, where the key question is not just who has the most traffic, but which products become the default environment for everyday use.

TheWhiteBox’s takeaway:

The highlights for me are three:

  1. All apps share one thing: iteration. Generative AI has cracked the code for iterative workflows, making it an undeniable asset for work that requires continued collaboration between humans and AI (knowledge work, creative work, coding, etc.).

  2. Automation tools are nowhere to be seen and for good reason (hint: it’s not that we don’t want to automate things, it’s that AI is too unreliable today).

  3. Google’s distribution is insane, with four entries on the list (Gemini, NotebookLM, Google AI Studio, and Google Labs).

I want to take the opportunity to highlight what’s barely here: Europe. As a European, I have to ask: where are the European companies? There are like two of them (Freepik and Lovable, the latter only half European because it's registered in Delaware), a statement on how badly Europe is missing the AI train.

With Volkswagen cutting 50,000 jobs, Porsche losing 98% of its operating profit (yes, 98%), going from 5.5 billion euros to 98 million in a single year, and our entire energy sovereignty surrendered to external powers (the US, Russia, Algeria, and the Gulf), have European bureaucrats made a single good decision in the last twenty years?

RESEARCH
Another Billion for World Models

Yann LeCun, formerly Chief Scientist at Meta, has announced a $1 billion funding round at a $3 billion valuation for AMI, his new start-up. Short for Advanced Machine Intelligence, the lab's idea is to create world models, a particular type of AI model meant for one thing only: predicting what happens next in a partially observable environment.

Say you’re driving down a street with cars parked on both sides, and you suddenly see a ball rolling out from behind a car. Humans automatically slow down upon seeing that, because even though no one is visible, there’s a high chance a kid is about to chase the ball into the road.

Here, your brain (i.e., your world model) was able to predict what would happen next despite not having full information. This is what a world model does, yet that type of prediction under high uncertainty is very, very hard for AIs.

Many in the industry believe current AIs can do that job, but LeCun disagrees. In his view, world models shouldn’t be generative, like ChatGPT, predicting the world by generating it. Instead, they need to be trained differently.

In other words, it’s a highly contrarian bet against the entire industry. But will it pan out?

TheWhiteBox’s takeaway:

This is the most European funding raise ever, instantly diluting the company by a third. It just shows that European investors are not like US or Japanese investors; you can smell how high the risk meters are on this deal.

I will say that going contrarian is the only way to make sense of investing in an AI Lab in 2026. You don’t simply jump into the Generative AI bandwagon 10 years after OpenAI was born, or 5 years after Anthropic (unless you’re Elon).

I do believe AI is not solved, and that algorithmic improvements are required in key areas, and world models feel like the place where it makes the most sense.

CODING
An Insane Valuation if True

According to Bloomberg, Cursor is in talks to raise at a staggering $50 billion valuation. The company crossed $2 billion in annualized recurring revenue in February.

TheWhiteBox’s takeaway:

This can only make sense to me under two conditions:

  1. Cursor is legitimately killing it at the enterprise level, which is stickier and might still require IDE-type features (the capability to see the code the AI is writing).

  2. The data. If Cursor is capable of showing proof that all the user data they’ve gathered during these years can be applied to train their own AI models that are superior to what others offer.

And to be clear, both must be true.

If not, investors are out of their minds because Cursor has no way to compete at the consumer level with OpenAI/Anthropic and will never make money unless it trains and serves its own models.

HARDWARE
Meta’s New MTIA Chips

Meta is accelerating its in-house AI chip program with four MTIA generations in roughly two years, aiming to run ranking, recommendation, and generative AI workloads more cheaply and efficiently across products used by billions.

The key point is not just performance, but faster iteration: Meta wants a new chip about every six months so it can keep up with how quickly AI models are changing.

The first chip, MTIA 300, is already in production and focused on ranking/recommendation workloads. MTIA 400, 450, and 500 are planned through 2027, with later chips adding more memory bandwidth and pushing further into generative AI inference.

There are many things to say here.

TheWhiteBox’s takeaway:

The first thing to point out is the outrageous amount of HBM per chip, reaching half a terabyte with the MTIA 500. This isn’t new; every single 2027 accelerator promises HBM in the hundreds of GBs, which is surprising considering HBM is actually a very scarce asset.

Let me be clear: there isn’t nearly enough HBM supply for everything that has been promised for next year, putting the Big Three of DRAM (SK Hynix, Micron, and Samsung) in a beautiful position to do what they have become pretty good at: raising prices.

If AI CapEx persists, 2026 and 2027 will be historic years for the trio, with Samsung potentially becoming the most profitable company in the world in absolute terms, which would be simply unprecedented.

By the numbers, we can also assess how “behind” Meta’s chips are. The MTIA 400 chip is in the same ballpark as NVIDIA’s top chip today, the B300, suggesting that Meta is about 2 years behind NVIDIA.

But this also implies that all NVIDIA and AMD customers are trying to develop their own inference chips to avoid having to purchase them from these two. Every single Hyperscaler is doing the same thing.

As a consequence, both NVIDIA and AMD will have to release inference-only chips to deter customers from trying to avoid them.

And my gut tells me that next week, during NVIDIA’s GTC, we'll get Jensen's response to all these ASIC plays, hopefully with a presentation on what they have in mind for the Groq acquisition.

AGENTS
Microsoft Announces Copilot Cowork

Satya Nadella himself announced the release of Copilot Cowork, Microsoft’s own version of Claude Code-esque agents: instead of being forced through a specific workflow, they are true AI agents in the sense that you give them goals and “full” freedom of action.

The substance of the release is that Microsoft is trying to make Copilot operational, not just conversational.

Cowork is grounded in Microsoft 365 data, such as emails, meetings, messages, files, and spreadsheets, through what Microsoft calls Work IQ, and Microsoft says it is built to operate within the company’s existing security and governance controls.

A notable part of the launch is that Microsoft says the feature draws on technology developed with Anthropic and the system behind Claude Cowork.

It’s basically Claude Cowork but inside the Microsoft ecosystem.

TheWhiteBox’s takeaway:

I can’t overstate how badly Microsoft is fumbling AI lately. Their Generative AI products are, point blank, terrible. I’ve been working with Copilot Studio over the past few weeks for client work, and I’m quite literally losing my mind.

It’s easily six months behind experiences like ChatGPT or Claude, I’m not kidding. Perhaps more!

Agents are slow (suggesting below-par inference infrastructure), and the harness is beyond terrible.

For example, when creating a simple agent with access to my Outlook calendar, if Copilot doesn’t find an event (because there are none; I was testing that particular use case), the model enters a doom loop of endless retries; it literally won’t stop retrying.

The agent is completely incapable of inferring that, maybe, there are simply no events to retrieve. I actually recorded the interaction so you can appreciate the doom loop; it’s literally the same call again and again.
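For what it's worth, the behavior the harness should have had is trivial to express. A minimal sketch, where `fetch_events` is a hypothetical stand-in for the calendar tool call: cap retries on errors, and treat an empty result as a valid, terminal answer rather than a reason to call the tool again.

```python
# Bounded retry loop: an empty result means "no events", not "try again".
def get_calendar_events(fetch_events, max_retries: int = 3):
    last_error = None
    for _ in range(max_retries):
        try:
            events = fetch_events()
        except ConnectionError as e:   # transient failure: retry, but with a cap
            last_error = e
            continue
        return events                  # [] is a legitimate outcome; exit cleanly
    # Give up loudly instead of looping forever
    raise RuntimeError(f"calendar unreachable after {max_retries} tries") from last_error

empty = get_calendar_events(lambda: [])   # returns [], no doom loop
```

Ten lines of harness logic versus an agent that retries the same call indefinitely: this is the kind of scaffolding gap I mean when I say the problem is the tooling, not the models.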

How am I supposed to recommend this tool to the CEO I’m meeting next week? I’m supposed to be the AI bull!

Do you realize that the agent we talked about last week, our AI secretary, can execute this with no issues at all, no mistakes or doom loops? I want to show him all this new stuff so badly, but I can’t get the tools his team uses to work properly for the simplest use cases.

In this particular case, I’ve had to program a workflow, holding the AI’s “hand” throughout the entire process, making sure every step is contained, a very 2024-esque approach that does no justice at all to the true extent of AI’s powers these days.

This small example beautifully illustrates the gap between SF bros’ perception of how incredibly fast things are changing in AI and the extremely slow diffusion of this technology in the enterprise.

Well, it’s not like enterprise executives are stupid; it’s just that the tools they are given aren't up to the task.

Truth is, enterprise ecosystems are just not ready at all (at least Microsoft's isn't), and they are single-handedly forcing enterprises to lag behind start-ups in the most consequential technology of the last forty years.

And to be clear, this feels a lot like a Microsoft problem, because I can confirm that the Generative AI experience in the Google Workspace ecosystem is nothing like this. I am migrating from Outlook to Gmail only because of this.

Microsoft had better wake up, or soon enough, they will be completely out of the game.

CONSUMER
Google Updates Google Maps

Google has released an AI-powered update to Google Maps, the biggest change to the product in over ten years.

Google is adding two major AI-driven upgrades to Maps: Ask Maps and Immersive Navigation.

Ask Maps is a new Gemini-powered conversational feature that lets people ask nuanced real-world questions about places, such as where to charge a phone without a long café wait or where to find a lit public tennis court at night.

It responds with personalized recommendations and a map, using Google’s live Maps data plus signals like places you’ve searched for or saved before. Google says it can also help build itineraries, surface practical tips from reviews, and let you quickly take action by booking, saving, sharing, or navigating to a place. It is rolling out in the US and India on Android and iOS.

The second update, Immersive Navigation, is framed as Google Maps’ biggest navigation redesign in more than a decade.

The post says it will make driving more intuitive with redesigned visuals, clearer route guidance, real-time updates, and more contextual help while traveling.

TheWhiteBox’s takeaway:

This is exactly the playbook I’ve been screaming about ever since I became Google-pilled: Google’s entire business is becoming ‘Gemini+’, embedding these models into every fiber of each and every product.

If we’re moving toward a declarative world, one where software acts based on your command, Google has a product in the top three in every area, both consumer and enterprise. Hard to compete with that.

ENTERPRISE
Advancing Claude for Excel and PowerPoint

Anthropic has released new updates to its Excel and PowerPoint add-ins. Now, Claude for Excel and PowerPoint share a single conversation context across all open spreadsheets and decks, so work in one app can inform actions in the other. It also brought Skills into both add-ins, turning recurring workflows into reusable one-click actions.

The starter skills are mostly finance-oriented:

  • in Excel, model auditing, DCF/LBO/3-statement templates, comps, and spreadsheet cleanup;

  • in PowerPoint, building competitive decks, updating decks, and checking consistency and polish.

The new cross-app behavior is in beta for Mac and Windows users on paid plans, and Skills are available on all paid plans.

TheWhiteBox’s takeaway:

This is a context engineering dream if they have executed it correctly. Consider that for nearly every problem you want to solve with Large Language Models (LLMs) these days, the key is whether you are providing the right context for the model to make good decisions.

Thus, exposing “the same Claude” to both applications in the same context ensures that the model understands the tasks more broadly. The question, as always with these things, is whether it’s reliable.

Reliability remains the main problem with agents. If solved, there’s basically no excuse anymore beyond pricing.

Don’t confuse this with what I was saying about Microsoft above; here, we are testing actual hard tasks, while Copilot fails the simplest use cases.

But solving reliability is easier said than done. An agent with what appears to be a great 90% per-step accuracy will make at least one mistake in fifty consecutive tries at a task 99.5% of the time.

In other words, the chance the model gets 50 consecutive tries (assuming they are independent) correct is just ~0.5% (0.9^50 ≈ 0.005). I’m sorry, that is not deployable for any serious company.
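The arithmetic behind this claim, for anyone who wants to check it:

```python
# Independent 90%-accurate steps compound badly over a 50-step task.
p_step, n_steps = 0.90, 50
p_all_correct = p_step ** n_steps          # ~0.5% chance of a flawless run
p_at_least_one_error = 1 - p_all_correct   # ~99.5% chance of at least one slip
```

Note the independence assumption is generous: in real agent runs, one mistake often cascades into more, so the true failure rate is, if anything, worse.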

And this reliability issue is plain to see. Popular benchmarks such as METR’s, which tracks the longest task AI models can complete (measured in human hours) with at least 50% success, show an exploding trend, yes:

But watch what happens when you require at least 80% (which is still unacceptable for most enterprises).

The graph looks almost identical, but look at the y-axis: the longest task has dropped from ~12 hours to just 1, meaning that when you ask agents to be consistent, performance dramatically falls.

So next time a charlatan comes screaming at you how they can’t understand why companies aren’t using AI as much as one would imagine, just show them this graph.

Closing Thoughts

I apologize for the density of today’s post; there was much to talk about (I left out a lot I would have wanted to mention, too).

Some conclusions for you to entertain:

  1. Debt is AI’s elephant in the room. And it’s a fucking big one. Oracle’s upcoming layoffs may wake society up to this. The problem? It’s only going to get worse.

  2. The speed at which OpenAI, Anthropic, and Google release new products is staggering. AI is not only their tools’ raison d’être; it also allows them to ship faster.

  3. Enterprise AI diffusion will still be slow because enterprise AI tooling sucks big time (e.g., Copilot). Google seems to be the only enterprise-focused company that got the memo. Microsoft, on the other hand, feels like it’s lacking vision and, perhaps worse, execution.

  4. Generative AI diffusion will also be slow because the tools are simply not reliable. The industry is sweeping the issue under the rug and pretending not to see the glaring reliability problems of Generative AI models, which is a disservice to its own revenue aspirations.

  5. The relationship between AI and War use cases is tricky and very dangerous. If I insist that AIs are unreliable, it’s because I mean it.

For business inquiries, reach out to me at [email protected]
