Claude (Anthropic)
Overview
Claude is Anthropic's family of AI models. Anthropic was founded in 2021 by former OpenAI researchers (Dario and Daniela Amodei) with a focus on AI safety. Claude is known for strong coding ability, long context handling, and careful instruction-following.
Model Lineup
Anthropic offers three tiers, each optimized for different use cases:
┌───────────────────────────────────────────────────────────────┐
│  Opus 4.6          Sonnet 4.6            Haiku 4.5            │
│  ────────          ──────────            ─────────            │
│  Deepest           Best coding           Fastest              │
│  reasoning         model, balanced       and cheapest         │
│                                                               │
│  Complex analysis  Daily development     High-volume tasks    │
│  Research          Agentic workflows     Quick classification │
│  Architecture      Production apps       Lightweight agents   │
└───────────────────────────────────────────────────────────────┘
Current Models (as of early 2026)
| Model | Context | Strengths | Speed | Best For |
|---|---|---|---|---|
| Opus 4.6 | 200K-1M | Deepest reasoning, nuanced analysis | Slower | Architecture decisions, research, complex debugging |
| Sonnet 4.6 | 200K-1M | Best coding, balanced intelligence + speed | Fast | Daily coding, agentic tasks, production apps |
| Haiku 4.5 | 200K | Fast, cost-efficient, 90% of Sonnet's capability | Fastest | Classification, extraction, high-volume pipelines |
Pricing
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| Opus 4.6 | $15 | $75 |
| Sonnet 4.6 | $3 | $15 |
| Haiku 4.5 | $0.80 | $4 |
Prompt caching reduces input costs by up to 90% for repeated system prompts and context. Cached input pricing: Sonnet $0.30/1M, Haiku $0.08/1M.
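Caching is opt-in per request: you tag a content block with `cache_control` and the API reuses that prefix on subsequent identical calls. A minimal sketch of the request payload; the style-guide text and diff are placeholders, and the actual call is left commented out:

```python
# Hypothetical long, stable system prompt standing in for real cached context.
STYLE_GUIDE = "You are a code reviewer. " + "Prefer small, pure functions. " * 200

# Marking the block with cache_control caches it server-side; later calls
# that reuse the identical prefix are billed at the cached-input rate.
request = dict(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": STYLE_GUIDE,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Review this diff: ..."}],
)
# client.messages.create(**request)
# response.usage reports cache_creation_input_tokens on the first call
# and cache_read_input_tokens on cache hits.
```

Only the prefix up to the marked block is cached, so put stable content (system prompt, reference documents) first and the per-request question last.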
Key Features
200K-1M Context Window
Claude can process very long inputs. 200K tokens is standard across all models; Opus and Sonnet support up to 1M tokens.
200K tokens ≈ a 500-page book
1M tokens ≈ an entire codebase (thousands of files)
Use cases: analyze entire repos, process long legal documents, compare multiple papers.
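The book-length comparisons above follow from a common rule of thumb (roughly 0.75 English words per token, ~300 words per printed page); the exact ratio varies by content and tokenizer:

```python
# Rough rule of thumb: 1 token ~ 0.75 English words, ~300 words per page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

def tokens_to_pages(tokens: int) -> int:
    """Approximate printed pages covered by a given token budget."""
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(tokens_to_pages(200_000))    # 500  -> roughly a 500-page book
print(tokens_to_pages(1_000_000))  # 2500 -> several books or a large repo
```

Code tokenizes less efficiently than prose, so budget more tokens per line when feeding in source files.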
Extended Thinking
Claude can reason internally before answering. It gets a hidden "scratchpad" to work through complex problems step by step.
Without thinking: Immediate response, may miss nuances
With thinking: Model reasons for 5-30 seconds, then gives a more accurate answer
Toggle in Claude.ai: Option+T (macOS) / Alt+T (Windows/Linux).
In the API:
```python
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=16000,  # must exceed the thinking budget
    temperature=1,     # extended thinking requires temperature=1
    thinking={"type": "enabled", "budget_tokens": 10000},
    messages=[{"role": "user", "content": "Your complex question"}],
)
```
Tool Use (Function Calling)
Claude can call external tools/functions you define:
```python
tools = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)
# Claude responds with a tool_use block calling get_weather(city="Paris")
```
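Claude does not execute the tool itself: it returns a `tool_use` content block, you run the function yourself, and you send the result back as a `tool_result` block in a follow-up message. A sketch of that round trip (the `toolu_01A` id and the weather values are illustrative):

```python
# Claude's reply contains a tool_use content block, e.g.:
tool_use = {
    "type": "tool_use",
    "id": "toolu_01A",          # illustrative id; read it from the response
    "name": "get_weather",
    "input": {"city": "Paris"},
}

# You run the tool yourself...
weather = {"temp_c": 18, "conditions": "partly cloudy"}  # your own lookup

# ...then send the result back, referencing the tool_use id:
followup_messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": [tool_use]},
    {"role": "user", "content": [{
        "type": "tool_result",
        "tool_use_id": tool_use["id"],
        "content": str(weather),
    }]},
]
# client.messages.create(model="claude-sonnet-4-20250514", max_tokens=1024,
#                        tools=tools, messages=followup_messages)
# -> Claude now answers in natural language using the tool result.
```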
Vision (Image Understanding)
Claude can analyze images: screenshots, diagrams, charts, photos, handwritten notes.
```python
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": "..."}},
            {"type": "text",
             "text": "What's wrong with this UI? List accessibility issues."},
        ],
    }],
)
```
PDF Reading
Claude can read and analyze PDF documents directly, including complex layouts, tables, and figures.
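In the API, a PDF goes in as a `document` content block, base64-encoded much like an image. A sketch, using placeholder bytes where you would normally read a real file:

```python
import base64

# Placeholder bytes; in practice: pdf_bytes = open("report.pdf", "rb").read()
pdf_bytes = b"%PDF-1.4 ..."
pdf_b64 = base64.standard_b64encode(pdf_bytes).decode()

message_content = [
    {"type": "document",
     "source": {"type": "base64",
                "media_type": "application/pdf",
                "data": pdf_b64}},
    {"type": "text", "text": "Summarize the tables in this report."},
]
# client.messages.create(model="claude-sonnet-4-20250514", max_tokens=1024,
#                        messages=[{"role": "user", "content": message_content}])
```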
Claude.ai (Web Interface)
The consumer-facing product at claude.ai:
| Feature | What It Does |
|---|---|
| Chat | Conversational interface, supports images and file uploads |
| Projects | Persistent context: upload files, set custom instructions, shared across conversations |
| Artifacts | Code, documents, and diagrams generated in a side panel, editable and downloadable |
| Custom instructions | Personal preferences that apply to all conversations |
| Memory | Claude remembers facts about you across conversations (when enabled) |
Projects
Projects are one of Claude's standout features:
Project: "Backend API Development"
├── Knowledge: architecture.md, api-spec.yaml, db-schema.sql
├── Custom instructions: "Use TypeScript. Follow our REST conventions."
└── Conversations: (all share the project context)
Every conversation in the project has access to the uploaded knowledge and instructions without re-uploading.
Artifacts
When Claude creates substantial content (code, SVGs, HTML, markdown documents), it renders them in an interactive side panel:
- Code — Syntax highlighted, copy-to-clipboard
- HTML/CSS/JS — Live preview in the artifact panel
- SVG/Diagrams — Visual rendering
- Documents — Formatted markdown
Claude Code (CLI)
Claude Code is Anthropic's command-line tool for agentic coding. It reads your files, writes code, and runs commands.
```bash
# Install
npm install -g @anthropic-ai/claude-code

# Run in your project directory
claude

# Or pass a task directly
claude "Add input validation to the user registration endpoint"
```
What It Can Do
- Read and understand your entire codebase
- Write, edit, and create files
- Run terminal commands (build, test, lint)
- Search across files with grep/glob
- Manage git operations
- Multi-file refactoring
Key Concepts
- Plan mode: Claude explains what it will do before doing it (toggle with Option+P / Shift+Tab)
- Thinking: Extended reasoning before acting (toggle with Option+T)
- Auto-accept: Let Claude run commands without confirmation (use carefully)
When to Use Claude Code vs Claude.ai
| Task | Claude Code | Claude.ai |
|---|---|---|
| Multi-file code changes | Best choice | Not ideal |
| Understanding a codebase | Great (reads files directly) | Needs manual copy-paste |
| Running tests/builds | Can execute directly | Cannot |
| Writing a document | Works but overkill | Better (artifacts) |
| Quick questions | Works | Better (faster UI) |
| Debugging with context | Best (sees actual files + errors) | Limited |
Claude API
Messages API (Basic Call)
```python
import anthropic

client = anthropic.Anthropic()  # uses the ANTHROPIC_API_KEY env var

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a helpful coding assistant.",
    messages=[
        {"role": "user",
         "content": "Write a Python function to validate email addresses."}
    ],
)
print(response.content[0].text)
```
Streaming
```python
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain microservices."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
Structured Output via Tool Use
Force a specific JSON schema by defining a tool and requiring its use:
```python
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=[{
        "name": "extract_info",
        "description": "Extract structured information",
        "input_schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
                "skills": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["name", "age", "skills"],
        },
    }],
    tool_choice={"type": "tool", "name": "extract_info"},
    messages=[{"role": "user",
               "content": "John is 28 and knows Python, Go, and Rust."}],
)
# response.content[0].input ==
#   {"name": "John", "age": 28, "skills": ["Python", "Go", "Rust"]}
```
Claude vs GPT-4o vs Gemini
| Feature | Claude (Sonnet 4.6) | GPT-4o | Gemini 2.5 Pro |
|---|---|---|---|
| Context window | 200K-1M | 128K | 1M |
| Coding | Excellent (top-tier) | Excellent | Strong |
| Reasoning | Extended thinking | o1/o3 models | Flash thinking |
| Vision | Yes | Yes | Yes (+ video) |
| Tool use | Yes | Yes (function calling) | Yes |
| Image generation | No | Yes (DALL-E) | Yes (Imagen) |
| Web browsing | No (without MCP) | Yes | Yes (Google Search) |
| CLI coding tool | Claude Code | Codex CLI | Gemini CLI (limited) |
| Input pricing | $3/1M | $2.50/1M | $1.25/1M (<200K) |
| Output pricing | $15/1M | $10/1M | $10/1M |
When to Choose Claude
- You need long context (200K-1M tokens)
- Coding tasks (especially agentic multi-file work)
- Instruction following and careful constraint adherence
- Safety-sensitive applications
- Complex reasoning with extended thinking
When to Choose Alternatives
- GPT-4o: Image generation, web browsing, ecosystem (plugins, GPTs)
- Gemini: Google integration, multimodal (video), very long context at lower cost
- Open source: Privacy, offline use, cost at scale (see 13 - Open Source Models)
Quick Tips
- Use extended thinking for complex problems. The accuracy improvement is significant.
- Use projects in Claude.ai to avoid re-uploading context every conversation.
- Prefer Sonnet for 90% of tasks. Use Opus only for the hardest reasoning problems.
- Use Haiku for high-volume classification, extraction, or routing.
- Cache system prompts via the API to reduce costs by up to 90%.
- Use XML tags in prompts. Claude handles structured input with XML particularly well.
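The XML-tags tip in practice: wrap each distinct part of the prompt (document, question, instructions) in its own tags so Claude can't confuse them. The tag names below are a common convention, not a fixed schema, and the document text is a placeholder:

```python
# Placeholder inputs; in practice these come from your data.
document = "Q3 revenue grew 12% while costs rose 5%."
question = "What was the revenue growth?"

# Tags separate reference material from the task, so instructions in the
# document text are less likely to be mistaken for instructions to Claude.
prompt = f"""<document>
{document}
</document>

<question>
{question}
</question>

Answer using only the document above."""
print(prompt)
```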
Resources
- 🔗 Anthropic Documentation
- 🔗 Claude API Reference
- 🔗 Claude Code
- 🔗 Anthropic Cookbook (examples)
- 🔗 Claude Pricing
- 🔗 Anthropic Research
Previous: 09 - System Prompts & Instructions | Next: 11 - ChatGPT & OpenAI