freshcrate

Glossary

34 terms across 10 crates. Each links back to where it's introduced.

A

AGI (Artificial General Intelligence)
Hypothetical AI that can learn and perform any intellectual task a human can. Does not exist yet. → Crate #10
AI Agent
An AI system that can take autonomous actions in the world using tools, not just generate text. Follows an observe-think-act loop. → Crate #10
Artificial Intelligence (AI)
A human-made system that can learn, reason, and solve problems. Not one single invention — more like a toolbox of techniques. → Crate #1
Attention Mechanism
The key innovation in Transformers — lets the model learn which words (or tokens) to focus on when processing each part of the input. → Crate #6
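A toy sketch of the idea: score each token against a query, then turn the scores into attention weights with a softmax. The token vectors and query here are made up for illustration, not real learned values.

```python
import math

# Three token vectors and one query, invented for illustration.
tokens = {"the": [1.0, 0.0], "cat": [0.0, 1.0], "sat": [0.7, 0.7]}
query = [0.0, 1.0]  # "what is most relevant at this position?"

# Dot-product score for each token, then softmax into weights.
scores = {w: sum(q * k for q, k in zip(query, v)) for w, v in tokens.items()}
total = sum(math.exp(s) for s in scores.values())
weights = {w: math.exp(s) / total for w, s in scores.items()}

print(max(weights, key=weights.get))  # "cat" gets the most attention
```

Real Transformers do this with learned query/key/value matrices across every pair of positions, but the focus-by-weighting idea is the same.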

B

Backpropagation
The algorithm for training neural networks — calculates error, works backwards through layers, and adjusts weights to reduce future errors. → Crate #4
Bias (in data)
When training data over-represents or under-represents certain groups, leading the model to perform unevenly. → Crate #3
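A minimal single-weight sketch of the error-then-adjust loop (real backpropagation chains this rule backwards through many layers). The numbers are made up for illustration.

```python
# One weight, one training example, squared error.
x, target = 2.0, 10.0
w = 1.0
lr = 0.05

for step in range(100):
    y = w * x              # forward pass: compute the prediction
    error = y - target     # how far off we are
    grad = 2 * error * x   # chain rule: derivative of error^2 w.r.t. w
    w -= lr * grad         # adjust the weight to reduce future error

print(round(w, 3))  # approaches 5.0, since 5.0 * 2 = 10
```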

C

CNN (Convolutional Neural Network)
A neural network designed for images. Slides small filters across the image to detect patterns at increasing levels of complexity. → Crate #5
Computer Vision
The field of making computers understand images and video — classification, detection, segmentation, and generation. → Crate #5
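The "sliding filter" idea in one dimension, with made-up numbers: a tiny filter slides along a signal and fires where it finds the pattern it detects. CNNs do the same in 2-D over pixels.

```python
# A step "edge" in brightness, and a filter that responds to increases.
signal = [0, 0, 0, 1, 1, 1]
kernel = [-1, 1]

# Slide the filter along the signal, multiplying and summing at each spot.
output = [
    sum(k * s for k, s in zip(kernel, signal[i:i + len(kernel)]))
    for i in range(len(signal) - len(kernel) + 1)
]

print(output)  # [0, 0, 1, 0, 0]: the filter fires only at the edge
```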

D

Deep Learning
Neural networks with many hidden layers. 'Deep' refers to the layer count, not philosophical depth. → Crate #4
Deepfake
AI-generated fake video or audio depicting real people saying or doing things they never did. → Crate #8
Diffusion Model
A generative model that learns to remove noise from images. Generation starts with random noise and progressively denoises it into a coherent image. → Crate #9

G

GAN (Generative Adversarial Network)
Two networks competing: a generator creates fakes, a discriminator detects them. They improve by training against each other. → Crate #9

H

Hallucination
When an AI model confidently generates information that sounds plausible but is factually incorrect. → Crate #1

L

Learning Rate
How much weights are adjusted on each training step. Too high = overshooting. Too low = painfully slow convergence. → Crate #4
LLM (Large Language Model)
A massive transformer trained on internet-scale text to predict the next word. At scale, this produces surprisingly general capabilities. → Crate #6
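A toy sketch of all three regimes on a simple curve, f(w) = (w - 3)², whose minimum is at w = 3. The function and rates are made up for illustration.

```python
# Gradient descent on f(w) = (w - 3)^2, starting from w = 0.
def train(lr, steps=50):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of (w - 3)^2
        w -= lr * grad      # the learning rate scales each adjustment
    return w

print(round(train(lr=0.1), 3))    # converges near the minimum at 3
print(round(train(lr=1.1), 3))    # too high: overshoots and diverges
print(round(train(lr=0.001), 3))  # too low: barely moved after 50 steps
```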

M

Machine Learning (ML)
A subset of AI where computers learn rules from data rather than being explicitly programmed. → Crate #2

N

Narrow AI
AI that excels at one specific task (like playing chess or recognizing faces) but can't generalize to other tasks. All current AI is narrow. → Crate #1
Neural Network
A model architecture inspired by biological neurons. Layers of simple mathematical units that collectively learn complex patterns. → Crate #4
NLP (Natural Language Processing)
The field of making computers understand, generate, and work with human language. → Crate #6

O

Object Detection
Identifying what objects are in an image AND where each one is located (bounding boxes). → Crate #5
Overfitting
When a model memorizes training data (including noise) instead of learning general patterns. Performs well on training data, poorly on new data. → Crate #7

P

Parameters / Weights
The millions (or billions) of numbers inside a model that get adjusted during training. They encode what the model has learned. → Crate #2

R

ReAct Pattern
Reason + Act — the core agent loop: observe the situation, reason about what to do (using an LLM), take an action (using a tool), observe the result, repeat. → Crate #10
Reinforcement Learning
Learning by trial and error. The agent takes actions, receives rewards or penalties, and adjusts its strategy. → Crate #2
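The loop can be sketched with stand-ins: the `llm` function and the single tool below are hypothetical stubs, not a real model or API, but the observe-reason-act-observe cycle is the pattern.

```python
# Stub "LLM": a real agent would call a language model here.
def llm(observation):
    return "calculator" if "math" in observation else "done"

# Stub tool registry with one tool.
tools = {"calculator": lambda: "42"}

observation = "task: solve a math question"
for _ in range(5):                           # cap the loop to avoid running forever
    action = llm(observation)                # reason: decide what to do next
    if action == "done":
        break
    result = tools[action]()                 # act: run the chosen tool
    observation = f"tool returned {result}"  # observe: feed the result back in

print(observation)  # tool returned 42
```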

S

Supervised Learning
Training a model with both inputs (questions) and correct outputs (answers). The most common form of ML. → Crate #2
Synthetic Data
Artificially generated data that mimics real data. Used to augment limited real datasets. → Crate #3

T

Temperature
A setting that controls randomness in AI output. Higher temperature = more creative/random. Lower = more deterministic/focused. → Crate #6
Tool Use
Giving an AI access to external tools (browser, code runner, APIs) so it can do real work beyond generating text. → Crate #10
Training Data
The examples fed to a model during training. Quality and diversity of this data directly determine model performance. → Crate #3
Transformer
The dominant neural network architecture for language (and increasingly other domains). Uses 'attention' to learn which parts of the input are relevant to each other. → Crate #6
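Under the hood, temperature divides the model's raw scores (logits) before they become probabilities. A small sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temp):
    scaled = [x / temp for x in logits]          # temperature divides the logits
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # invented scores for three candidate tokens
print(softmax_with_temperature(logits, temp=0.5))  # sharper: top token dominates
print(softmax_with_temperature(logits, temp=2.0))  # flatter: sampling gets more random
```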
Turing Test
A test proposed by Alan Turing: if a human can't tell whether they're chatting with a machine or a person, the machine passes. → Crate #1

U

Underfitting
When a model is too simple to capture the patterns in the data. → Crate #7
Unsupervised Learning
Training a model with data but no labels. The model discovers patterns and groups on its own. → Crate #2

W

Word Embeddings
Representing words as lists of numbers (vectors) where similar words have similar vectors. Enables math on language: King - Man + Woman = Queen. → Crate #6
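The famous equation, with hand-made 3-D vectors (real embeddings have hundreds of dimensions and are learned from data, not set by hand):

```python
# Toy embeddings, invented so the arithmetic works out.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "man":   [0.5, 0.1, 0.1],
    "woman": [0.5, 0.1, 0.9],
    "queen": [0.9, 0.8, 0.9],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def nearest(v, exclude):
    # Squared Euclidean distance; real systems usually use cosine similarity.
    def dist(w): return sum((x - y) ** 2 for x, y in zip(vecs[w], v))
    return min((w for w in vecs if w not in exclude), key=dist)

result = add(sub(vecs["king"], vecs["man"]), vecs["woman"])
print(nearest(result, exclude={"king", "man", "woman"}))  # queen
```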