Anthropic published an official guide to session management in Claude Code, written by Thariq Shihipar. The core message: context management shapes your experience with Claude Code more than most users realize. Having 1M tokens doesn't mean you should use all of them: context rot (attention spreading across stale, irrelevant tokens) degrades quality silently. The most useful part of the post is a decision table that maps every common situation to exactly the right session tool.
Source: Claude Code: Session Management and 1M Context (Anthropic, April 2026)
## The Situation Table
This is the centerpiece. Memorize it or pin it next to your terminal.
| Situation | Reach for | Why |
|---|---|---|
| Same task, context is still relevant | Continue | Everything in the window is still load-bearing; don't pay to rebuild it |
| Claude went down a wrong path | Rewind (double-Esc) | Keep the useful file reads, drop the failed attempt, re-prompt with what you learned |
| Mid-task but session is bloated with stale debugging/exploration | `/compact <hint>` | Low effort; Claude decides what mattered. Steer it with a hint if needed |
| Starting a genuinely new task | `/clear` | Zero rot; you control exactly what carries forward |
| Next step will generate lots of output you'll only need the conclusion from | Subagent | Intermediate tool noise stays in the child's context; only the result comes back |
## Decision Flowchart
```
Task complete. What now?
│
├─ Same task, context still fresh?
│   └─► Continue (don't pay to rebuild context)
│
├─ Claude went the wrong way?
│   └─► Esc Esc (rewind)
│       Keep file reads, drop the bad attempt
│       Re-prompt with what you learned
│
├─ Session feels bloated / slow?
│   └─► /compact <hint>
│       Claude summarizes; you steer with the hint
│
├─ Completely new task?
│   └─► /clear
│       Write a fresh brief, zero context rot
│
└─ Next step = heavy output, only need the conclusion?
    └─► Subagent
        Noise stays in child context
        Only the result comes back
```
## Context Rot: The Invisible Problem
The 1M context window is huge, but bigger isn't always better. Context rot sets in as stale tokens accumulate:

```
Fresh session:   [system prompt] [your message] [relevant code]
                 → high attention density, precise output

Bloated session: [system prompt] [old debug logs] [failed approach #1]
                 [irrelevant file reads] [stale exploration] [your message]
                 → attention spread thin, model "distracted" by noise
```
The model doesn't forget old tokens; it actively attends to them. Stale debugging output and abandoned approaches dilute the signal-to-noise ratio. This is why exceeding the context limit isn't the only problem: degraded quality within the limit is the sneakier failure mode.
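A crude way to picture the dilution is to track the fraction of the window that is still load-bearing. The function and the token counts below are made up purely for illustration; real attention behavior is far more complex than a ratio.

```python
# Illustrative only: a toy "attention density" model of context rot.
# Token counts are invented; the point is how the ratio collapses.

def attention_density(relevant_tokens: int, stale_tokens: int) -> float:
    """Fraction of the context window that is still load-bearing."""
    total = relevant_tokens + stale_tokens
    return relevant_tokens / total if total else 1.0

fresh = attention_density(relevant_tokens=8_000, stale_tokens=0)
bloated = attention_density(relevant_tokens=8_000, stale_tokens=72_000)
# fresh is 1.0; bloated is 0.1 -- same relevant context, ten times the noise
```

The relevant context never left the window in the bloated case; it is simply surrounded by nine times its volume in noise.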
## The Five Tools in Detail

### 1. Continue

The default. Keep going in the same session. Use when the context is still load-bearing: every file read, every decision, every piece of state matters for the next step.
When it breaks down: After long debugging sessions, exploratory searches, or multiple failed approaches. The context now contains as much noise as signal.
### 2. Rewind (Esc Esc)

Double-tap Escape to jump back to a previous message. Claude's state rolls back, but file reads are preserved, so you keep the useful context and drop only the failed attempt.

The key insight: don't correct inline ("no, not that, try this"). Rewind to the point where Claude had the right context, then re-prompt with learned constraints:
```
Before: "Fix the bug" → [bad attempt] → "No, I meant..."
After:  "Fix the bug" → [bad attempt] → Esc Esc →
        "Fix the bug by changing X, don't touch Y"
```
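Conceptually, rewind is a rollback of the message history that leaves cached file reads intact. The `Session` class below is a hypothetical sketch of that behavior, with invented names; it is not Claude Code's actual internals.

```python
# Hypothetical sketch: rewind rolls back messages but keeps file reads.
# Class and method names are invented for illustration.

class Session:
    def __init__(self) -> None:
        self.messages: list[str] = []
        self.file_reads: dict[str, str] = {}  # path -> contents, survives rewind

    def send(self, msg: str) -> None:
        self.messages.append(msg)

    def read_file(self, path: str, contents: str) -> None:
        self.file_reads[path] = contents

    def rewind(self, to_index: int) -> None:
        # Drop the failed attempt; the useful context stays cached.
        self.messages = self.messages[:to_index]

s = Session()
s.send("Fix the bug")
s.read_file("auth.py", "def check(): ...")
s.send("No, I meant...")        # the inline correction we want to avoid
s.rewind(1)                     # Esc Esc: back to just after "Fix the bug"
s.send("Fix the bug by changing X, don't touch Y")
```

After the rewind, the failed exchange is gone from the history while `auth.py` is still in the cache, which is exactly why re-prompting is cheap.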
### 3. `/compact <hint>`

Triggers a manual compaction. Claude summarizes the session into a compressed brief, keeping what it thinks matters. The optional `<hint>` steers the summary:
```
/compact focus on the auth middleware changes, drop the debugging tangent
```
Risk: if the model can't predict where your work is going (common after long debugging sessions), it may drop context you actually needed; the article notes this is exactly when bad autocompacts happen.
Mitigation: Use the hint to tell it what matters.
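As a toy mental model, a hinted compaction behaves like "keep what matches the hint, collapse the rest into a summary line". Real compaction is an LLM summarization pass, not the keyword filter sketched here; the `compact` function and the sample history are invented for illustration.

```python
# Toy model only: real /compact summarizes with the LLM, not keywords.

def compact(messages: list[str], hint: str) -> list[str]:
    """Keep messages that match the hint; fold the rest into one summary line."""
    words = hint.lower().split()
    keep = [m for m in messages if any(w in m.lower() for w in words)]
    dropped = len(messages) - len(keep)
    return [f"[summary of {dropped} compacted messages]"] + keep

history = [
    "edited auth middleware to check tokens",
    "debug: printed request headers",
    "debug: tried disabling cache",
    "auth middleware now returns 401 on bad token",
]
compacted = compact(history, "auth middleware")
# Two debugging lines fold into one summary; the auth work survives verbatim.
```

The sketch makes the risk visible too: anything the hint fails to match ends up inside the summary line, which is why a vague hint after a long tangent loses real context.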
### 4. `/clear`

Nuclear option: wipe the session entirely. You write the brief from scratch. More work, but you control exactly what carries forward.

Best for: genuine task switches. Write 2-3 sentences of context for the new task. The mental model: pretend you're briefing a new person.
### 5. Subagent

Delegate a self-contained task to a child agent. The child gets a clean context, does the work, and returns only the conclusion. All intermediate tool noise (file reads, grep output, build logs) stays in the child's context.

Mental test: "Will I need this tool output again, or just the conclusion?"
Good subagent tasks:
- Codebase search / exploration
- Verification against a spec
- Documentation generation from git changes
- Running and analyzing test output
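The pattern above can be sketched as a function boundary: everything the child accumulates dies with its scope, and only the return value enters the parent's context. The names here (`run_subagent`, `rate_limit.py`, the fake grep hits) are invented for illustration.

```python
# Hypothetical sketch of the subagent pattern: the child does the noisy
# work in its own throwaway context and returns only the conclusion.

def run_subagent(task: str) -> str:
    child_context: list[str] = []  # isolated; discarded when we return
    # Imagine grep output, file reads, and build logs piling up here.
    child_context.extend(["grep hit 1", "grep hit 2", "read rate_limit.py"])
    return f"{task}: found an existing limiter in rate_limit.py"

parent_context = ["plan: add rate limiting"]
parent_context.append(run_subagent("search for rate limiter"))
# The parent context grew by one line, not by the whole search transcript.
```

This is the same reasoning as the mental test above: if only the return value matters, the transcript that produced it never needs to enter your window.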
## When to Start a New Session

The article's general principle:

> "When you start a new task, you should also start a new session."
Even with 1M tokens, context rot may still occur. The cost of starting fresh is low (a 2-3 sentence brief), but the cost of context rot is invisible until your outputs degrade.
Session length vs. quality:

```
Quality ▲ ████████████████████████
        │ ██████████████████████
        │ ████████████████████      ← context rot starts
        │ ████████████████
        │ ██████████████            ← quality silently degrades
        │ ████████████
        └────────────────────────────────► Context size
          0        200K        500K        1M tokens
```
## `/compact` vs. `/clear`: When to Use Which
| Factor | `/compact` | `/clear` |
|---|---|---|
| Effort | Low: Claude does the work | Higher: you write the brief |
| Control | Model decides what matters | You decide what matters |
| Risk | May drop context you needed | You might forget to include something |
| Best when | Mid-task, need to shed noise | Task switch, need a clean slate |
| Context quality | Good if you hint well | Excellent if you brief well |
| When it fails | After long debugging tangents | When the task is too complex to summarize in 2-3 sentences |
## How the LearnAI Team Could Use This

- **CS305 students using Claude Code for assignments:** Teach them the situation table as a decision framework. Most beginners either never start new sessions (context rot) or start fresh too often (wasting context). The table gives them a rubric.
- **Research workflows (PCSAT, proof writing):** Long proof sessions are exactly where context rot hits hardest. Use `/compact focus on the proof state and pending lemmas` to shed the exploratory noise while keeping the mathematical context.
- **LAI project development:** When switching between slide-generator features, Codex review, and wiki writing, use `/clear` between genuinely different tasks rather than letting one session accumulate cross-domain noise.
- **Teaching context engineering:** The table is a great pedagogical tool for explaining why context management matters: it's not about running out of space, it's about attention quality.
## Real-World Use Cases

| Scenario | Tool | Example |
|---|---|---|
| Writing a feature, tests pass, moving to next feature | `/clear` | Brief: "Auth middleware done. Now add rate limiting to POST /tasks" |
| Debugging a failing test, tried 3 approaches, none worked | Rewind | Double-Esc back to after the file reads, re-prompt with constraints from the failed attempts |
| Implementing a plan, 45 min in, response quality dropping | `/compact` | `/compact focus on the remaining plan steps and current file state` |
| Need to check if a function exists before using it | Subagent | "Search the codebase for any existing rate limiter middleware" |
| Proof assistant session, 20 lemmas deep, exploring a side branch | `/compact` | `/compact keep the main proof state and proved lemmas, drop the exploratory branch` |
| Switching from coding to writing a wiki entry | `/clear` | Completely different task, different tools, different context needs |
## The One-Line Summary

Context management shapes your experience more than the model itself. Use the situation table to make the right call every time: Continue, Rewind, Compact, Clear, or Subagent.