Learn to Use AI Like I Do

FUTURE
Becoming an AI Expert User Today

Welcome back! This week, we are looking at the first part of a two-parter in which I will give you my honest view of how Generative AI should be used today (or at least how I use it).

Today’s article will focus on how I use it in my daily life; the next part will be how I use it in my businesses. Thus, today’s newsletter includes the following:

  1. How I understand foundation models and how that shapes how I interact with them

  2. A short guide of recommended use cases

  3. My best advice to get the most out of each interaction, from learning why threatening the models works to pro tips like explicit creativity mode or self-consistency.

This is not a technical read; it requires zero knowledge of Generative AI or AI in general. In fact, the less you know, the more useful it will be, but it still includes tips that will help anyone, regardless of their expertise.

By the end, I guarantee you will have become a better Generative AI user, an expert who is way more deliberate, confident, and prepared than anyone around you.

If AI won’t take your job, but a human using AI will, why not be that human?

Let’s dive in.

A Primer on GenAI - How to Think About an AI Model

I will focus exclusively on Large Language Models, or LLMs, products like ChatGPT or Gemini. Even though AI is much broader, every reference in this newsletter to ‘AI models’ refers to these guys.

Let me start with a non-technical, purely vibes-based intuition about how to think of these products. This sets the tone for every interaction I have with them; it’s literally my test as to whether an AI can help me with ‘x’ or ‘y’ or not.

And I’m pretty sure it will help you, too.

First, a disclaimer: I highly discourage attribution of consciousness, sentience, or any anthropomorphizing of AI models. In fact, I highly encourage you to be deliberate about your treatment of AI models as ‘it’, not ‘he’ or ‘she’.

Never, and I mean never, attribute human characteristics to it, as that will backfire. Your brain wants to anthropomorphize AIs, and if you give in to that desire, you will soon develop a relationship with these models that can get pretty dark and lead to unwanted behaviors by you and by ‘it’. The model will also perform worse, and by worse I mean sycophantic, ending with a product that endlessly praises you for no reason.

Instead, view AIs as knowledge map tools, and yourself as a knowledge retriever.

To drive this home, let’s use an even more familiar analogy: books. But not the dusty ones sitting on your shelf.

I’m being deliberate about using the word ‘knowledge’ here, as plenty of mechanistic evidence shows these models compress the underlying knowledge in their training data. In other words, they are not mere databases that simply display information back to you; they actually combine pieces of data into knowledge, which is why the ‘book’ analogy is more fitting.

Let’s start. Picture a book that contains all the knowledge in the world. Every page is a given topic or concept.

However, this is no ordinary book because this knowledge is distributed differently from traditional books, specifically along these lines:

  • The book has frequency bias. Unlike a traditional book, which explains each topic once and gives no concept special weight, this book is more likely to surface some concepts than others, as if some concepts had just a few pages while others had thousands. This is called ‘frequency bias,’ which translates to: the more the book has seen something, the more it has learned about it, and vice versa, naturally.

  • Unlike traditional books, this book can ‘interpolate’ or create new concepts by combining others. The book probably doesn’t have a page about Shakespeare’s love for iPhones because he did not write about them. However, it can interpolate between both known concepts (‘Shakespeare’ and ‘iPhone’) and create something ‘new’ by combining them (a Shakespearean poem on iPhones). Simply put, this book can create new pages it didn’t have before based on others.

  • Some pages are children of others. In other words, to get to a child concept, you most likely have to pass through a parent concept. For instance, the model will rarely refer to the twenty arrondissements unless you first talk about Paris (Paris is divided into twenty districts called arrondissements). Put simply, some pages require referring to others to reach them, as if they were hidden unless you first reference the parent page.

  • The book has creative freedom, meaning that among the possible ways it could give you the information, it might improvise, not always providing the most likely answer but one of the most likely. This is what we call hallucinations, which are always portrayed as something bad, but in reality, it’s a feature. All the model does is hallucinate; we only call it a hallucination when it’s wrong. Thus, it’s a misnomer.

But the biggest difference between this book and a traditional book is how you communicate with it.

This book is not read, it’s talked to.

And how do I get content from this ‘book’?

Unlike traditional books, which have an index that points to the page where each concept/topic lies, in this book, you don’t have a fixed, structured index. Instead, you must use the prompt you send to the book as the index.

In layman’s terms, depending on how you prompt the book, it will return some parts or others. But as we mentioned, the book has creative freedom, so retrieval doesn’t depend only on the quality of your prompt; there is also a randomness factor: the page you get back will be semantically relevant to your prompt, but it may be a different one among many possible pages each time, even if your prompt stays fixed.

But why am I telling you this while putting so much focus on this idea of knowledge retrieval? Simple: because that’s all they do. Every single use case, product, start-up, you name it, that is based on Generative AI models is playing this retrieval game.

For instance, a company like Sierra, which uses Generative AI models for customer support, is quite literally just a bunch of carefully crafted prompts that instruct and communicate with the model so that its internal knowledge of customer support emerges. The knowledge has always been there; it’s just that Sierra has put far more effort into prompt engineering for customer support than you ever will.

Of course, Sierra also customizes models with proprietary data. But that is simply Sierra getting an edge over others by creating a new, more specialized book. At the end of the day, it’s all knowledge retrieval and being pretty damn good at knowledge elicitation, or, as the term generally goes, prompt engineering.

I’ll repeat it again. Generative AI models are knowledge maps; fancy books you communicate with to get knowledge back. Thus, we are knowledge retrievers.

Therefore, bad prompts will yield bad outcomes. Most of the time, the issue isn’t that the book doesn’t have the knowledge to help you; it’s that you were not directing it to the correct ‘pages’.

And now, let’s move to what you’re here for: How do I use Generative AI models? How do I become a good ‘book’ prompter?

A Trip Down Use Case Lane

People tend to overcomplicate or exaggerate what can be done with conversational AIs. But at the end of the day, every use case falls into six categories, of which I use five (you’ll see why I avoid the sixth).

Writing

Hold your horses for one second. I’m not going to tell you I write using ChatGPT; I’m not a noob!

In fact, the first mistake to avoid with LLMs is using them to write instead of using them to assist your writing; that is the key difference.

Sadly, most of the content you read today on social media, in consulting reports, student work, and even in many, many research papers (especially Chinese ones that are obviously using it for translation, which can get a pass) leans far too heavily on Large Language Models like ChatGPT.

This has turned digital content into a dull, boring reading experience where everything sounds the same. The ‘underscores’, the ‘delves’, and other expressions that have suddenly become all too common are only a product of millions of users using the same bloody product to write.

That is the noob way of using ChatGPT or Gemini, so let me tell you how pros use it.

The first reason why I never use AIs to write is that writing is a crucial tool for understanding. I think I got this from Neil deGrasse Tyson, who mentioned that he never spoke about things he hadn’t written about first.

Or, as Paul Graham, the legendary Y Combinator investor who discovered Sam Altman, always says, “Writing is thinking.” I stand by this heavily, and while writing is one of the main ways I earn my living, it’s also part of my learning toolkit.

In fact, I worry that AIs will disincentivize writing, and that an entire generation of new thinkers won’t actually be doing much thinking.

So where do LLMs come into play in my workflow? Easy: they are never about writing, but about editing.

  1. Critique

Just like humans, LLMs are much better writing critics than writers. They are obnoxiously dull when you tell them to write about something, but they are excellent at calling bullshit on your writing, pointing out feeble arguments, fallacies, inaccuracies, and of course, grammatical errors.

Also, critiquing is a much less hallucination-prone use case than writing itself because the context is already provided, which makes all the difference.

But what do I mean by that?

If you tell an LLM to write about {insert topic of interest}, you have probably given minimal context, maybe just the topic's name. LLMs condition on your context to predict what they need to answer back, so the less context you give them, the worse they behave.

In fact, most failed attempts at implementing Generative AI models point to a single source of failure: a lack of good context. That is why I always recommend that enterprises use open-source implementations with a 1:1 relationship between models and use cases, so that the context is part of the model, but that’s a topic for another day.

Instead, in critique exercises, they have the entire article in their context, and their job isn’t about eliciting knowledge but about checking whether the context you provided squares with their knowledge.

Put another way, you aren’t testing whether the model can recall relevant knowledge about the topic; you want it to check the statistical likelihood of your context being true, based on what it has seen.

Think of it this way. What question is easier for you:

  1. What’s the most northern city in the United States?

  2. Is Miami the most northern city in the United States?

One forces you to dig into your knowledge map and recall a fact you either know or you don’t, while the other asks you to judge how likely it is that Miami, a southern city, is the most northern city in the US.

The latter is much easier, right? You don’t have to know the precise city, just reason whether Miami is a likely candidate.

Using LLMs as judges (that’s the term in AI parlance) works really well, and AI labs use it heavily.

If the AI Labs do it, so should you.
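If you ever want to script this instead of pasting drafts into a chat window, here is a minimal sketch of a critique pass using the OpenAI Python SDK. The model name, file path, and prompt wording are illustrative assumptions on my part, not a prescription; any capable chat model will do.

# Minimal critique-pass sketch (assumes the OpenAI Python SDK and an API key
# in the OPENAI_API_KEY environment variable; the model name is just an example).
from openai import OpenAI

client = OpenAI()

# Load the draft you want critiqued; "draft.md" is a placeholder path.
with open("draft.md", encoding="utf-8") as f:
    draft = f.read()

critique_prompt = (
    "You are a strict editor. Do NOT rewrite the text. "
    "Point out weak arguments, fallacies, claims that look unlikely to be true, "
    "and grammatical errors. Quote each offending sentence and explain the problem.\n\n"
    f'"""\n{draft}\n"""'
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": critique_prompt}],
)

print(response.choices[0].message.content)

The important part is the instruction not to rewrite: you are asking the model to judge your context against its knowledge, which, as discussed above, is the easier and less hallucination-prone task.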

  2. Refactoring

Based on the same working principles, they are also great at refactoring content, making it cleaner and more structured.

Here, it’s essential to be very clear in your prompting so that the model keeps your style rather than overtaking it and turning your content into AI slop (we’ll see later why simply stating this instruction works so well).

State that explicitly, and the AI will comply and stay loyal to your writing style.
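To make that concrete, here is the kind of refactoring instruction I mean; the exact wording is only a sketch, not a magic formula:

"""
Restructure the text below so it reads more cleanly: fix the structure, tighten transitions, and remove repetition. Keep my wording, tone, and style wherever possible; do not add new ideas, do not remove claims, and do not rewrite it in your own voice. Return only the edited text.
"""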

  3. Fill in the gap

This use case will be particularly relevant when we discuss Deep Research use cases, but the point here is that I have a particular way of writing that is nothing like an AI’s. Specifically, and that’s just how I write, my pieces are always written in one go.

I’m not a reporting journalist; I’m a storyteller. Thus, if I don’t write the entire piece in one go, the outcome will be a bad story: a chunk of unrelated, poorly narrated sequences of facts about chips and AI models that is unbearable to read.

This means that I absolutely hate having to consult my sources every time I don’t remember a given fact; it kills my flow.

So what do I do?

As the AI model has also been part of my research, and most of that research is a conversation between me and the AI during my intuition-building phase (more on that later), whenever I don’t recall an exact date or fact, I insert placeholders throughout my article that I then ask the AI to fill in for me without touching a single word of my piece.

I can’t emphasize enough how amazing these models already are at this. Because they have the facts in the conversation’s context, they know precisely which fact is missing from each placeholder.

For example, from my research, I knew China had installed a huge amount of solar capacity in 2024. However, when I wrote about it, I couldn’t recall the exact figure. Thus, in Monday’s article, I sent the model this precise prompt:

Please read the following text and, without touching the other words, fill in the missing data in the placeholders I've provided below considering our previous conversation.

"""
“China added… [Placeholder 1: find the exact value of solar energy capacity added by China in 2024]… of solar in 2024 alone—which is...[Placeholder 2: compare the value to USA and provide the multiple]… the total installed solar capacity of the US to date."
"""

To which the AI model wrote ‘278 GW’ and ‘more than’ in their respective gaps. The key thing here is that I know what I want to say; the AI is not deciding that on my behalf, it’s just retrieving the exact value I need.

Notice that the second task required some reasoning: comparing the Chinese value, 278 GW, to the US’ total deployed solar capacity (239 GW), which we had discussed earlier in the conversation.

Tricky, yet the model does it perfectly.
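For anyone who wants to reproduce this fill-in-the-gap trick programmatically, here is a minimal sketch, again assuming the OpenAI Python SDK and an example model name. The research_history messages below are a stand-in for the real research conversation where the figures were discussed.

# Fill-in-the-gap sketch (assumes the OpenAI Python SDK; model name is an example).
from openai import OpenAI

client = OpenAI()

# Stand-in for the real research conversation where the facts were discussed.
research_history = [
    {"role": "user", "content": "How much solar capacity did China add in 2024?"},
    {"role": "assistant", "content": "China added roughly 278 GW of solar capacity in 2024."},
]

fill_request = {
    "role": "user",
    "content": (
        "Please read the following text and, without touching the other words, "
        "fill in the missing data in the placeholders, considering our previous conversation.\n\n"
        '"""\nChina added... [Placeholder 1: exact solar capacity added by China in 2024]... '
        "of solar in 2024 alone.\n\"\"\""
    ),
}

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=research_history + [fill_request],
)

print(response.choices[0].message.content)

Passing the earlier research messages is what makes this work: the model fills the placeholder from the conversation’s context rather than guessing.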

And speaking of retrieval, we move to the next big use case, the most important of them all in accrued economic value: retrieval.

Retrieval

There’s no better use case for LLMs than search and retrieval, period.

In fact, as we discussed earlier, one could say that all they do is search. This makes them the perfect tool (as long as they are equipped with some auxiliary features we will discuss) for quickly retrieving information from the Internet or from their own knowledge.

However, how you approach this search will determine whether the AI does a good job or a terrible one.
