25 essential terms across three tiers. Each one comes with an ELI5 (Explain Like I'm 5) simple breakdown, the real definition, practical context, and products you can try today.
Core concepts everyone should know before using any AI tool. To make these basics stick, we'll use the analogy of a robot student going to school {0_0}
| # | Term | ELI5 — 'Robot Student Analogy' | Definition | Usage | Products |
|---|---|---|---|---|---|
| 01 | Artificial Intelligence (AI) | A robot... {0_0} "beep boop" — a computer that can do things similar to what a human brain does, like recognizing pictures, understanding language, or making decisions. | A broad field of computer science focused on building systems capable of performing tasks that typically require human intelligence, such as reasoning, learning, and perception. | When someone says an app "uses AI," they mean it can learn patterns, make predictions, or generate content without being explicitly programmed for every scenario. | Siri, Google Assistant, Tesla Autopilot, Netflix Recommendations |
| 02 | Machine Learning (ML) | How the robot studies — the methodology behind how it learns. Instead of telling the robot student every rule, you give them a bunch of examples and let them figure out the rules on their own. | A subset of AI where algorithms improve their performance on a task by learning from data, rather than following hard-coded instructions. | Used when you have lots of data and want the system to find patterns — like detecting spam emails, recommending songs, or predicting prices. | Google Colab, Hugging Face, Kaggle, TensorFlow (Google) |
| 03 | Training | The study session — the period where the robot student actually sits down and reviews examples. Once the study session is done, the brain is shaped and ready to perform. | The computational process of optimizing a model's parameters by exposing it to large datasets, allowing it to learn patterns, relationships, and structures. Training can take days to months and requires significant compute resources. | Before an AI can do anything useful, it needs to be trained. ChatGPT was trained on billions of web pages, books, and articles. Training is expensive — GPT-4 reportedly cost over $100M to train. | Common Crawl (web data), The Pile (open dataset), AWS/Google Cloud (compute), NVIDIA GPUs |
| 04 | Model | The robot's brain — it's what the robot student has after training is done. All those study sessions shaped how they think. The brain is trained and ready to answer questions. | A trained AI system that has learned patterns from data and can make predictions or generate outputs. Models vary in size, architecture, and specialization. | When people say "which model are you using?" they mean which trained AI system — GPT-4, Claude, Gemini, etc. Different models have different strengths, costs, and capabilities. | GPT (OpenAI), Claude (Anthropic), Gemini (Google), Llama (Meta), Mistral |
| 05 | Large Language Model (LLM) | A robot brain that specializes in reading and writing — this brain is powerful, but language is its specialty and core strength. | A type of neural network trained on massive text datasets that can generate, understand, and reason about human language. Measured in billions of parameters. | The engine behind chatbots, writing assistants, and code generators. When you chat with ChatGPT or Claude, you're talking to an LLM. | Claude Sonnet 4.6 (Anthropic), GPT-5.2 (OpenAI), Gemini Flash (Google), Llama Maverick (Meta) |
| 06 | Chatbot | The robot student's face and voice — the LLM is the brain inside, but the chatbot is what you actually see and talk to. Same student, just the outside versus the inside. | A software application designed to simulate conversation with human users, powered by an LLM under the hood. Products like ChatGPT and Claude are chatbots — the most common way people interact with AI today. | Used for customer support, personal assistants, education, and entertainment. Modern AI chatbots can handle complex, multi-turn conversations. | chatgpt.com, Claude.ai, gemini.google.com, Character.ai, Intercom.com (b2b) |
| 07 | Token | A word (or piece of a word) — it's the unit of measurement. The prompt, the context, the response — they're all measured in tokens. | The basic unit of text that a language model processes. Text is split into tokens (words, subwords, or characters) before being fed into the model. Token count affects cost and context limits. | LLMs charge per token and have token limits. GPT-4 can handle ~128K tokens of context. Knowing token counts helps you manage costs and fit within model limits. | OpenAI Tokenizer |
| 08 | Prompt (aka Input) | The exam question (measured in tokens) — it's what you put in front of the robot student. The better and clearer the question, the better the answer you get back. | The input text provided to a language model that guides its response. Can include instructions, context, examples, or constraints. | Every interaction with an AI chatbot starts with a prompt. Writing effective prompts (prompt engineering) is a key skill for getting useful AI outputs. | ChatGPT, Claude, Gemini, Midjourney, DALL-E |
| 09 | Context | The open-book notes (measured in tokens) — the extra information you lay out on your desk during an open-book test so the robot student can reference it while answering. | The surrounding information provided alongside a prompt that helps the model understand the situation and generate a more relevant response. | Adding context to your prompt improves accuracy. Instead of asking "What should I do?" you provide background: "I'm a beginner learning Python and I'm stuck on loops. What should I do?" | ChatGPT, Claude, Gemini, Perplexity |
| 10 | Context Window | The amount of desk space for notes + question + answer (measured in tokens) — limits how many open-book notes you can lay out at once. A bigger desk fits more context. When it's full, the oldest pages fall off the edge. | The maximum number of tokens a model can process in a single interaction, including both the input prompt and the generated output. | If your context window is 128K tokens, you can paste in an entire book and ask questions about it. Smaller windows mean you need to be more selective about what you include. | Claude (200K tokens), GPT-4 Turbo (128K), Gemini 1.5 Pro (1M tokens), Llama 3 (8K–128K) |
| 11 | Inference | Thinking through the answer — the robot student reads the question and the open-book notes, then reasons through what they know to figure out the best answer. | The process of a trained model taking in inputs and context, then reasoning through its learned parameters to produce an output. This is the operational phase, as opposed to the training phase. | Every time you send a message to ChatGPT, that's an inference call. Inference speed and cost are major factors in deploying AI at scale. | ChatGPT, Claude, Gemini, Perplexity |
| 12 | Response (aka Output) | The robot's answer to the test question — what the robot student writes down after thinking it through. The inference is the thinking, the response is what you get back. | The text, code, or content generated by a model after processing the input prompt and context through inference. Responses vary in length, quality, and format depending on the model and prompt. | When ChatGPT replies to your message, that reply is the response. Response quality depends on prompt clarity, context provided, and model capability. Responses are measured in tokens and factor into cost. | ChatGPT, Claude, Gemini, Copilot, Perplexity |
| 13 | Hallucination | Confidently writing a wrong answer — the robot student doesn't know the answer, so their brain fills in the gaps with something that sounds right instead of saying "I don't know." | When a language model generates text that is factually incorrect, fabricated, or inconsistent with reality, despite sounding plausible and confident. | A major limitation of LLMs. Always fact-check AI-generated content, especially for research, medical, legal, or financial information. RAG and grounding techniques help reduce hallucinations. | All LLMs can hallucinate, Perplexity (cites sources to reduce this), Google Gemini with Search Grounding |
| 14 | AI Ethics | The code of conduct — guidelines for how the robot student should behave: don't cheat, don't be biased, don't hurt people, and be honest about what you are. | A set of values, principles, and techniques (e.g., fairness, transparency, accountability, privacy) that guide the responsible development and use of AI systems to align with human rights and societal well-being. | As AI becomes more powerful, ethical questions grow: Should AI make hiring decisions? Who's responsible when an AI causes harm? Should AI-generated content be labeled? Companies increasingly have AI ethics teams and guidelines. | Anthropic (Constitutional AI), OpenAI Safety Team, Google DeepMind Ethics, Partnership on AI, EU AI Act |
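To see why tokens are "pieces of words" rather than whole words, here's a minimal sketch of tokenization. This is a toy splitter, not a real LLM tokenizer: production models use subword schemes like BPE, so their actual token counts will differ from what this produces.

```python
# Toy tokenizer sketch: real LLMs use subword schemes (e.g. BPE),
# so actual token counts differ. This only illustrates the idea
# that text is split into small units before the model sees it.

def toy_tokenize(text: str) -> list[str]:
    """Split text into word-like tokens, keeping punctuation separate."""
    tokens = []
    word = ""
    for ch in text:
        if ch.isalnum():
            word += ch
        else:
            if word:
                tokens.append(word)
                word = ""
            if not ch.isspace():
                tokens.append(ch)  # punctuation becomes its own token
    if word:
        tokens.append(word)
    return tokens

prompt = "What's the capital of France?"
tokens = toy_tokenize(prompt)
print(tokens)       # ['What', "'", 's', 'the', 'capital', 'of', 'France', '?']
print(len(tokens))  # 8 -- token count is what drives cost and context limits
```

Notice that a short question already costs 8 tokens in this scheme, because the apostrophe and question mark count too. For real token counts, tools like the OpenAI Tokenizer (listed in the table) show exactly how a given model splits your text.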
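The "oldest pages fall off the desk" behavior of a context window can also be sketched in a few lines. This is an assumption-laden illustration: the `fit_to_window` helper is hypothetical, and it approximates token counts with word counts, whereas real systems count tokens with the model's own tokenizer and often summarize old messages instead of simply dropping them.

```python
# Context-window sketch: when conversation history exceeds the window,
# the oldest messages are dropped first ("pages fall off the desk").
# Word count stands in for token count here; real systems use the
# model's tokenizer and may summarize rather than drop old messages.

def fit_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages whose combined size fits the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):   # walk newest-first
        size = len(msg.split())      # crude stand-in for a token count
        if used + size > max_tokens:
            break                    # desk is full: older pages drop off
        kept.append(msg)
        used += size
    return list(reversed(kept))      # restore chronological order

history = [
    "old note about loops",
    "medium note about functions",
    "latest question: how do I fix this bug?",
]
# With a 12-"token" desk, the oldest note no longer fits:
print(fit_to_window(history, max_tokens=12))
```

The newest message is always kept first, which matches how chatbots behave in long conversations: the model still sees your latest question, but details from early in the chat silently disappear once the window fills up.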