Browsing and Enriching Past Sessions

The Problem

After weeks of AI-assisted development you have dozens of sessions scattered across JSONL files in ~/.claude/projects/. Finding the session where you debugged the Redis connection pool, or remembering what you decided about the caching strategy three Tuesdays ago, often means grepping raw JSON.

There is no table of contents, no search, and no summaries.

This recipe shows how to turn that raw session history into a browsable, searchable, and enriched journal site you can navigate in your browser.

TL;DR

Export and Generate

ctx recall export --all
ctx journal site --serve

Enrich

/ctx-journal-enrich-all

Rebuild

ctx journal site --serve

Read on for what each stage does and why.

Commands and Skills Used

  Tool                     Type     Purpose
  ctx recall list          Command  List parsed sessions with metadata
  ctx recall show          Command  Inspect a specific session in detail
  ctx recall export        Command  Export sessions to editable journal Markdown
  ctx journal site         Command  Generate a static site from journal entries
  ctx journal obsidian     Command  Generate an Obsidian vault from journal entries
  ctx serve                Command  Serve any zensical directory (default: journal)
  /ctx-recall              Skill    Browse sessions inside your AI assistant
  /ctx-journal-enrich      Skill    Add frontmatter metadata to a single entry
  /ctx-journal-enrich-all  Skill    Full pipeline: export if needed, then batch-enrich

The Workflow

The session journal follows a four-stage pipeline.

Each stage is idempotent and safe to re-run: by default, each stage skips entries that have already been processed.

export -> enrich -> rebuild
  Stage     Tool                      What it does                             Skips if                            Where
  Export    ctx recall export --all   Converts session JSONL to Markdown       File already exists (safe default)  CLI or agent
  Enrich    /ctx-journal-enrich-all   Adds frontmatter, summaries, topic tags  Frontmatter already present         Agent only
  Rebuild   ctx journal site --build  Generates browsable static HTML          N/A                                 CLI only
  Obsidian  ctx journal obsidian      Generates Obsidian vault with wikilinks  N/A                                 CLI only
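The skip rules for the Export and Enrich stages can be sketched as simple guards: each stage checks for its own completion marker before doing any work. A minimal illustration in Python (the helper names are hypothetical, not ctx's actual code):

```python
from pathlib import Path

def needs_export(jsonl_path: Path, journal_dir: Path) -> bool:
    """Export is skipped when the target Markdown file already exists."""
    target = journal_dir / (jsonl_path.stem + ".md")
    return not target.exists()

def needs_enrichment(entry_text: str) -> bool:
    """Enrichment is skipped when YAML frontmatter is already present."""
    return not entry_text.startswith("---\n")
```

This is why re-running the pipeline is safe: already-exported files and already-enriched entries fall through both guards untouched.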

Where Do You Run Each Stage?

Export (Steps 1 to 3) works equally well from the terminal or inside your AI assistant via /ctx-recall. The CLI is fine here: the agent adds no special intelligence; it just runs the same commands.

Enrich (Step 4) requires the agent: it reads conversation content and produces structured metadata.

Rebuild and serve (Step 5) is a terminal operation that starts a long-running server.

Step 1: List Your Sessions

Start by seeing what sessions exist for the current project:

ctx recall list

Sample output:

Sessions (newest first)
=======================

  Slug                           Project   Date         Duration  Turns  Tokens
  gleaming-wobbling-sutherland   ctx       2026-02-07   1h 23m    47     82,341
  twinkly-stirring-kettle        ctx       2026-02-06   0h 45m    22     38,102
  bright-dancing-hopper          ctx       2026-02-05   2h 10m    63     124,500
  quiet-flowing-dijkstra         ctx       2026-02-04   0h 18m    11     15,230
  ...

Slugs Look Cryptic?

These auto-generated slugs (gleaming-wobbling-sutherland) are hard to recognize later.

Use /ctx-journal-enrich to add human-readable titles, topic tags, and summaries to exported journal entries, making them easier to find.

Filter by project or tool if you work across multiple codebases:

ctx recall list --project ctx --limit 10
ctx recall list --tool claude-code
ctx recall list --all-projects

Step 2: Inspect a Specific Session

Before exporting everything, inspect a single session to see its metadata and conversation summary:

ctx recall show --latest

Or look up a specific session by its slug, partial ID, or UUID:

ctx recall show gleaming-wobbling-sutherland
ctx recall show twinkly
ctx recall show abc123

Add --full to see the complete message content instead of the summary view:

ctx recall show --latest --full

This is useful for checking what happened before deciding whether to export and enrich it.

Step 3: Export Sessions to the Journal

Export converts raw session data into editable Markdown files in .context/journal/:

# Export all sessions from the current project
ctx recall export --all

# Export a single session
ctx recall export gleaming-wobbling-sutherland

# Include sessions from all projects
ctx recall export --all --all-projects

--keep-frontmatter=false Discards Enrichments

--keep-frontmatter=false discards enriched YAML frontmatter during regeneration.

Back up your journal before using this flag.

Each exported file contains session metadata (date, time, duration, model, project, git branch), a tool usage summary, and the full conversation transcript.

Re-exporting is safe: running ctx recall export --all only exports new sessions; existing files are never touched. Use --dry-run to preview what would be exported without writing anything.

To re-export existing files (e.g., after a format improvement), use --regenerate: conversation content is regenerated while preserving any YAML frontmatter you or the enrichment skill has added. You'll be prompted before any files are overwritten.

--regenerate Replaces the Markdown Body

--regenerate preserves YAML frontmatter but replaces the entire Markdown body with freshly generated content from the source JSONL.

If you manually edited the conversation transcript (added notes, redacted sensitive content, restructured sections), those edits will be lost.

BACK UP YOUR JOURNAL FIRST.

To protect entries you've hand-edited, you can explicitly lock them:

ctx recall lock <pattern>

Locked entries are always skipped, regardless of flags.

If you prefer to add locked: true directly in frontmatter during enrichment, run ctx recall sync to propagate the lock state to .state.json:

ctx recall sync

See ctx recall lock --help and ctx recall sync --help for details.
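The sync step can be pictured as a one-way propagation: scan each entry's frontmatter for `locked: true` and record the result in the state file. A rough sketch in Python (the function and the .state.json layout shown here are assumptions for illustration; the real file format may differ):

```python
import json
import re
from pathlib import Path

def sync_locks(journal_dir: Path, state_path: Path) -> dict:
    """Record entries whose frontmatter declares `locked: true`
    in a state file (illustrative layout, not ctx's actual schema)."""
    state = json.loads(state_path.read_text()) if state_path.exists() else {}
    locked = state.setdefault("locked", [])
    for entry in sorted(journal_dir.glob("*.md")):
        text = entry.read_text()
        # Pull out the YAML frontmatter block between the --- fences
        m = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
        if m and re.search(r"^locked:\s*true\s*$", m.group(1), re.MULTILINE):
            if entry.name not in locked:
                locked.append(entry.name)
    state_path.write_text(json.dumps(state, indent=2))
    return state
```

The point is the direction of flow: frontmatter is the source of truth you edit, and sync mirrors it into the state file that export consults.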

Step 4: Enrich with Metadata

Raw exports have timestamps and transcripts but lack the semantic metadata that makes sessions searchable: topics, technology tags, outcome status, and summaries. The /ctx-journal-enrich* skills add this structured frontmatter.

Locked entries are skipped by enrichment skills, just as they are by export. Lock entries you want to protect before running batch enrichment.

Batch enrichment (recommended):

/ctx-journal-enrich-all

The skill finds all unenriched entries, filters out noise (suggestion sessions, very short sessions, multipart continuations), and processes each one by extracting titles, topics, technologies, and summaries from the conversation.

It shows you a grouped summary before applying changes so you can scan quickly rather than reviewing one by one.

For large backlogs (20+ entries), the skill can spawn subagents to process entries in parallel.

Single-entry enrichment:

/ctx-journal-enrich twinkly
/ctx-journal-enrich 2026-02-06

Each enriched entry gets YAML frontmatter like this:

---
title: "Implement Redis caching middleware"
date: 2026-02-06
type: feature
outcome: completed
topics:
  - caching
  - api-performance
technologies:
  - go
  - redis
libraries:
  - go-redis/redis
key_files:
  - internal/cache/redis.go
  - internal/api/middleware/cache.go
---

The skill also generates a summary and can extract decisions, learnings, and tasks mentioned during the session.

Step 5: Generate and Serve the Site

With exported and enriched journal files, generate the static site:

# Generate site structure only
ctx journal site

# Generate and build static HTML
ctx journal site --build

# Generate, build, and serve locally
ctx journal site --serve

Then open http://localhost:8000 to browse.

The site includes a date-sorted index, individual session pages with full conversations, search (press /), dark mode, and enriched titles in the navigation when frontmatter exists.

You can also serve an already-generated site with ctx serve, which only serves and never regenerates.

The site generator requires zensical (pipx install zensical).

Where Does the Agent Add Value?

Export, list, and show are mechanical: the agent runs the same CLI commands you would, so you can stay in your terminal for those.

The agent earns its keep in enrich. It reads conversation content, understands what happened, and produces structured metadata. That is agent work, not CLI work.

You can also ask your agent to browse sessions conversationally instead of remembering flags:

What did we work on last week?
Show me the session about Redis.
Export everything to the journal.

This is convenient but not required: ctx recall list gives you the same inventory.

Where the agent genuinely helps is chaining the pipeline:

You:   What happened last Tuesday?
Agent: Last Tuesday you worked on two sessions:
       - bright-dancing-hopper (2h 10m): refactored the middleware
         pipeline and added Redis caching
       - quiet-flowing-dijkstra (18m): quick fix for a nil pointer
         in the config loader
       Want me to export and enrich them?
You:   Yes, do it.
Agent: Exports both, enriches, then proposes frontmatter.

The value is staying in one context while the agent runs export -> enrich without you manually switching tools.

Putting It All Together

A typical pipeline from raw sessions to a browsable site:

# Terminal: export and generate
ctx recall export --all
ctx journal site --serve
# AI assistant: enrich
/ctx-journal-enrich-all
# Terminal: rebuild with enrichments
ctx journal site --serve

If your project includes Makefile.ctx (deployed by ctx init), use make journal to combine export and rebuild stages. Then enrich inside Claude Code, then make journal again to pick up enrichments.

Session Retention and Cleanup

Claude Code does not keep JSONL transcripts forever. Understanding its cleanup behavior helps you avoid losing session history.

Default Behavior

Claude Code retains session transcripts for approximately 30 days. After that, JSONL files are automatically deleted during cleanup. Once deleted, ctx recall can no longer see those sessions — the data is gone.

The cleanupPeriodDays Setting

Claude Code exposes a cleanupPeriodDays setting in its configuration (~/.claude/settings.json) that controls retention:

  Value         Behavior
  30 (default)  Transcripts older than 30 days are deleted
  60, 90, etc.  Extends the retention window
  0             Disables writing new transcripts entirely, not "keep forever"
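For example, to extend retention to 90 days, set the value in ~/.claude/settings.json:

```json
{
  "cleanupPeriodDays": 90
}
```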

Setting cleanupPeriodDays to 0

Setting this to 0 does not mean "never delete." It disables transcript creation altogether. No new JSONL files are written, which means ctx recall sees nothing new. This is rarely what you want.

Why Journal Export Matters

The journal export pipeline (Steps 1–4 above) is your archival mechanism. Exported Markdown files in .context/journal/ persist independently of Claude Code's cleanup cycle. Even after the source JSONL files are deleted, your journal entries remain.

Recommendation: export regularly — weekly, or after any session worth revisiting. A quick ctx recall export --all takes seconds and ensures nothing falls through the 30-day window.
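If you want to check how close your transcripts are to the cutoff, a small script can scan the transcript directory by file age. A sketch (the helper is hypothetical, not a ctx command; point it at your ~/.claude/projects directory):

```python
import time
from pathlib import Path

def sessions_at_risk(projects_dir: Path, retention_days: int = 30,
                     warn_margin_days: int = 7) -> list[Path]:
    """List JSONL transcripts within `warn_margin_days` of the
    retention cutoff -- export these before cleanup deletes them."""
    now = time.time()
    threshold = (retention_days - warn_margin_days) * 86400
    at_risk = []
    for f in projects_dir.rglob("*.jsonl"):
        if now - f.stat().st_mtime >= threshold:
            at_risk.append(f)
    return sorted(at_risk)
```

Anything this returns is worth an immediate ctx recall export --all.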

Quick Archival Checklist

  1. Run ctx recall export --all at least weekly
  2. Enrich high-value sessions with /ctx-journal-enrich before the details fade from your own memory
  3. Lock enriched entries (ctx recall lock <pattern>) to protect them from accidental regeneration
  4. Rebuild the journal site periodically to keep it current

Tips

  • Start with /ctx-recall inside your AI assistant. If you want to quickly check what happened in a recent session without leaving your editor, /ctx-recall lets you browse interactively without exporting.
  • Large sessions may be split automatically. Sessions with 200+ messages can be split into multiple parts (session-abc123.md, session-abc123-p2.md, session-abc123-p3.md) with navigation links between them. The site generator can handle this.
  • Suggestion sessions can be separated. Claude Code can generate short suggestion sessions for autocomplete. These may appear under a separate section in the site index, so they do not clutter your main session list.
  • Your agent is a good session browser. You do not need to remember slugs, dates, or flags. Ask "what did we do yesterday?" or "find the session about Redis" and it can map the question to recall commands.

Journal Files Are Sensitive

Journal files MUST be .gitignored.

Session transcripts can contain sensitive data such as file contents, commands, error messages with stack traces, and potentially API keys.

Add .context/journal/, .context/journal-site/, and .context/journal-obsidian/ to your .gitignore.
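That is, append these lines to your .gitignore:

```gitignore
.context/journal/
.context/journal-site/
.context/journal-obsidian/
```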

Next Up

Persisting Decisions, Learnings, and Conventions →: Record decisions, learnings, and conventions so they survive across sessions.

See Also