Last updated on August 12th, 2025 at 12:37 pm
Have you ever explored how LLMs actually work? If not, how can we talk about AEO, GEO, and so on? Most marketing strategies today are still based on metaphors that no longer apply. Terms like “ranking,” “indexing,” and “domain authority” may be appropriate for search engines, but they have little meaning in the architecture of a Large Language Model.
This is where things get technical. While it might feel overwhelming at first, I encourage you to read slowly, pause, and let the ideas settle.
To influence a system, even indirectly, you need to first understand how it works.
Unlike traditional search engines, LLMs like GPT-5, Claude 4.1 Opus, or Gemini 2.5 Pro / Flash do not retrieve web pages and choose the “best one.” Instead, they generate answers token by token, based on patterns learned during pre-training, shaped during fine-tuning, and often enhanced by real-time tools like search or code interpreters.
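To make the token-by-token idea concrete, here is a minimal, purely illustrative sketch of the generation loop. Everything in it is invented for the example: the tiny vocabulary, the `fake_logits` function (a stand-in for the billions of learned weights in a real network), and the sampling parameters. The loop itself, though, mirrors what real models do: score every token, convert scores to probabilities with softmax, sample one token, append it, repeat.

```python
import math
import random

# Toy vocabulary -- a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context):
    """Stand-in for a neural network: deterministic scores per token.
    A real LLM computes these from learned weights and the full context."""
    return [len(token) + 0.1 * len(context) for token in VOCAB]

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over the vocabulary."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context, steps=5, temperature=1.0, seed=0):
    """Repeat: score all tokens, sample one, append it, continue."""
    rng = random.Random(seed)
    tokens = list(context)
    for _ in range(steps):
        probs = softmax(fake_logits(tokens), temperature)
        next_token = rng.choices(VOCAB, weights=probs, k=1)[0]
        tokens.append(next_token)
    return tokens

print(generate(["the"], steps=4))
```

Note the `temperature` parameter: lowering it sharpens the distribution (more predictable output), raising it flattens it (more surprising output) — the same knob exposed by most LLM APIs.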
This chapter explains how that process works:
- How LLMs are trained
- How they encode language and meaning
- How their internal structure enables emergent abilities such as apparent reasoning, planning, and memory-like behavior
- Why outputs are fluent but unpredictable
- And why direct optimization is impossible, while indirect influence remains possible.
We’ll cover key concepts such as embeddings, self-attention, transformers, token prediction, and reinforcement learning from human feedback (RLHF).
We’ll also demystify why LLMs “sound smart” even when they’re wrong, and how the illusion of reasoning emerges from statistical computation.
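Of the concepts listed above, self-attention is the one that most often feels like magic, so here is a minimal sketch of scaled dot-product attention in plain Python. The 2-dimensional “embeddings” are toy values, and for simplicity queries, keys, and values are all the same vectors (real transformers learn separate projection matrices for each). The key takeaway: every output vector is a weighted blend of all input vectors, which is how the model relates each token to its full context.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(embeddings):
    """Scaled dot-product self-attention with queries = keys = values.
    Real transformers learn separate Q/K/V projections per attention head."""
    d = len(embeddings[0])
    output = []
    for q in embeddings:
        # Similarity of this token to every token (itself included),
        # scaled by sqrt(dimension) to keep scores in a stable range.
        scores = [dot(q, k) / math.sqrt(d) for k in embeddings]
        weights = softmax(scores)
        # The output is a weighted average of ALL input vectors.
        blended = [sum(w * v[i] for w, v in zip(weights, embeddings))
                   for i in range(d)]
        output.append(blended)
    return output

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy 2-d embeddings
print(self_attention(tokens))
```

Because the attention weights always sum to 1, each output stays inside the “span” of the inputs — the mechanism mixes existing information rather than inventing new vectors from nowhere.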

Picture credit: Rateb Al Drobi, https://www.linkedin.com/posts/rateb-al-drobi_post-410-how-llms-actually-generate-text-activity-7300199524680032256-njGZ/
This collection of posts will also include references to retrieval-augmented generation (RAG) and multi-modal capabilities, as many LLMs now use external tools (search, calculators, code) and work with not just text, but images, audio, and even video.
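Since RAG comes up repeatedly in the posts below, here is a deliberately simplified sketch of the retrieve-then-prompt pattern. The mini-corpus, the word-count “embedding,” and the prompt template are all invented for illustration; production systems use neural embedding models and vector databases, but the pipeline shape — embed the query, rank documents by similarity, prepend the winners to the prompt — is the same.

```python
import math
from collections import Counter

# Hypothetical three-document corpus for the example.
DOCS = [
    "LLMs generate text one token at a time.",
    "RAG retrieves documents and adds them to the prompt.",
    "Transformers use self-attention over token embeddings.",
]

def embed(text):
    """Toy embedding: a word-count vector (real RAG uses neural embeddings)."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    num = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query, DOCS))
    # Retrieved text is prepended so the model can ground its answer in it.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG work?"))
```

This is why RAG matters for visibility: content that gets retrieved gets quoted, regardless of where it would have “ranked” in a classic SERP.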
Below you can find the posts, extrapolated from the full research paper. For your convenience, here is the natural order in which the articles should be read:
TL;DR Browsing models fan-out multiple short queries, fetch top results, skim titles + intros, and compose a synthetic answer. Citations…
TL;DR LLMs provide direct answers to queries, reducing clicks on traditional SERPs. Ranking signals like backlinks and CTR lose importance;…
TL;DR You can’t rank on ChatGPT like on Google Search. LLMs don’t use ranking systems. LLMs rely on static training…
Continue Reading Why you can’t rank on ChatGPT and other LLMs
TL;DR Pre-Training teaches LLMs to predict the next word, not to “understand” meaning. Models are trained on trillions of tokens…
Continue Reading Inside LLMs: How Pre‑Training Shapes What ChatGPT Knows
TL;DR Neural networks are layered architectures where each layer extracts patterns and relationships from text. Attention mechanisms allow LLMs to…
TL;DR RLHF (Reinforcement Learning from Human Feedback) uses human annotators to rank model outputs and train reward models that align…
Continue Reading Inside LLMs: RLHF, RLAIF & the Evolution of Model Alignment
TL;DR LLMs don’t “know” facts; they predict tokens based on training data. Their knowledge is limited to the pre‑training corpus…
Continue Reading Inside LLMs: why LLMs don’t really “know” things
TL;DR Transformers are the core architecture of modern LLMs like GPT‑4, Claude, and Gemini. Each transformer block includes multi‑head attention,…
Continue Reading Inside LLMs: Understanding Transformer Architecture – A Guide for Marketers

Pietro Mingotti is an Italian neuroscience researcher, entrepreneur, and technical marketing specialist, best known as the founder and owner of Fuel LAB®, a leading digital and technical marketing agency based in Italy and operating worldwide. With a passion for science, creativity, innovation, and technology, Pietro has established himself as a thought leader in technical marketing and data science, and has helped numerous companies achieve their goals.

