AI fundamentals
Source: OpenAI Blog
What is artificial intelligence?
Artificial intelligence (AI) is a broad category of software that can recognize patterns, learn from data, and produce useful outputs. You’ve probably seen AI in everyday moments, such as:
- A map app rerouting you around traffic
- Your bank flagging a purchase as “unusual”
- A customer‑support chatbot answering common questions
AI is a category—not a single tool. Within that category are models: trained systems that learn from data and then apply what they’ve learned to new situations. Some models specialize in speech, vision, or forecasting.
Large language models (LLMs)
If you’re starting your AI journey with conversational tools like ChatGPT, you’re working with large language models. An LLM is a model designed to work with language. It learns patterns from massive text corpora and can generate or transform text in helpful ways. An LLM doesn’t “know” things the way a person does; it predicts the most likely next piece of language based on context.
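To make “predicts the most likely next piece of language” concrete, here is a deliberately tiny sketch in Python. It counts which word tends to follow which in a toy corpus (a word-level bigram model; the names `following` and `predict_next` are invented for this illustration). Real LLMs use neural networks trained on vastly larger corpora, but the core idea of predicting a likely continuation from context is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the massive text an LLM learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" twice, vs. once each for "mat" and "fish"
```

An LLM does something analogous at enormous scale: instead of literal counts, it learns statistical patterns that let it score likely continuations even for contexts it has never seen verbatim.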
Research labs such as OpenAI build these models and make them available through user‑facing products (e.g., ChatGPT, Codex) and APIs, allowing developers to integrate AI into their own software.
Training process
When an AI model is described as “trained,” that training usually happens in two stages:
Pre‑training
The model learns general patterns from a huge amount of text, gaining broad skills such as summarizing, drafting, translating, and explaining. Think of it as a new employee reading everything—manuals, examples, past projects—until they grasp the overall shape of the job.
Post‑training (fine‑tuning)
A “manager” coaches the model: clarifying instructions, matching tone, and enforcing policies. This stage improves the model’s ability to follow instructions, communicate effectively, and handle tricky situations. Safety checks are emphasized here so the model reduces harmful outputs, declines inappropriate requests, and responds carefully to sensitive or uncertain topics.
Model types
Different models are tuned for different trade‑offs—speed, depth, and adherence to multi‑step instructions.
Non‑reasoning models (often labeled “Instant”)
Optimized for fast, fluent output. Ideal for straightforward tasks where momentum matters, such as turning notes into a message, polishing wording, generating options, or extracting key points.
Reasoning models (often labeled “Thinking”)
Trained for deliberate, step‑by‑step problem solving—planning, complex analysis, tricky debugging, or decisions with constraints and edge cases. They may take longer but tend to track multiple moving parts and avoid shallow mistakes.
If you’re just getting started, you don’t need to worry about model choice—the default ChatGPT experience auto‑switches between these modes so you can focus on your question, not the settings.
Choosing the right setting
As you become familiar with your preferences (speed vs. depth, quick drafts vs. careful analysis), you can experiment with optional controls:
- Auto – the default, suitable for most tasks.
- Thinking – switch to this when a task is complex or high‑stakes.
Simple hierarchy
- AI – the overall field
- Models – trained systems that perform particular tasks
- Large language models (LLMs) – models focused on understanding and generating language, built by AI research labs
- ChatGPT – a product that lets you use an LLM effectively
With this picture in mind, you’ll be ready to learn how to get great results with tools like ChatGPT—starting with how to phrase your prompts to achieve the outcomes you want.