Every morning, yesterday's me waits — already organized
Earlier this year, Karpathy shared a pattern for building a personal wiki with LLMs. The core idea is simple: throw any source at an LLM — PDFs, meeting notes, chat logs — and it creates wiki pages, adds cross-references, and catches contradictions. The stuff people find too tedious to organize themselves, the LLM handles for them. So for the first time, the wiki doesn't die — it keeps growing.
I built and use this myself. I have a personal wiki with 121 markdown files, and I work out of it every day. The structure is laid out in the half-memex manifesto.
This post is about automating one of that wiki's sources: a pipeline that pushes Claude Code session logs into the wiki every night, with no manual step.
I closed my Claude Code session last night and went to sleep. In the morning, I opened my notes folder and found a few files had changed. Files I hadn't touched.
I opened them. There was a record of what I'd decided the day before, what I'd tried and abandoned, and why I'd gone the direction I did. Not a Slack mention, not a commit message — the actual words I'd said to Claude, organized right there.
Where this comes in handy:
- When I need to find a hypothesis I missed while debugging yesterday
- When I need to find what I decided the last time I hit the same bug a month ago
- When writing a weekly report, I just pull the thread from there
It's more accurate than relying on memory, and faster than digging through Slack.
The note file looks like this. A chunk that got automatically appended after last night's session:
```markdown
## 2026-04-07 D'CENT App Exploration
- biometric screen captured as black screen due to carousel animation
- fix: route through accessibility tree, read DOM level instead of UI dump
- updated TRANSITIONS.md: `AuthPrompt → tap Cancel → PasswordEntry`
- decision: L2 (interaction) done. Next session goes to L3 (real transactions).

(auto-sync from session 9ad78433, 2026-04-07 23:12)
```

This doesn't just go into one file — it gets distributed as fragments across several related files. Finish one session, and a few topic pages update together.
Two prerequisites. This post is for people who use Claude Code CLI every day. If you only use web Claude, the files this relies on don't exist. And it's for people who already have a personal notes system — Obsidian, a markdown folder, whatever. If you don't have a note system, you're better off stopping here.
## What accumulates where
Claude Code stores every session to disk as JSONL.
- Path: `~/.claude/projects/{cwd-encoded}/{session_id}.jsonl`
- Structure: one file per session, one line per turn
- Contents: user messages, assistant responses, tool calls, file edits, Bash commands — all in chronological order
Anyone reading this right now probably has several gigabytes already stacked under ~/.claude/projects/. If you work in Claude Code all day, this is the most accurate record of what you did that day. But normally, nobody looks at it.
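To get a feel for the one-line-per-turn shape, here's a grep-level peek at a synthetic sample. The field names below match what I've seen in these logs, but treat the exact schema as an assumption; real entries carry many more fields per line.

```shell
# Build a tiny stand-in session log (real logs have far more fields per line).
f=/tmp/sample-session.jsonl
cat >"$f" <<'EOF'
{"type":"user","message":{"role":"user","content":"fix the build"}}
{"type":"assistant","message":{"role":"assistant","content":"Looking at the error..."}}
{"type":"user","message":{"role":"user","content":"thanks"}}
EOF

grep -c '"type":"user"' "$f"   # user turns → 2
wc -l < "$f"                   # total turns → 3
```

For real work you'd reach for `jq` instead of `grep`, but even this level of tooling is enough to see that the log is the full transcript, in order.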
## The pipeline
Five steps from the moment a session ends to the moment notes are updated.
1. Set the hook. In `~/.claude/settings.json`, wire up two hooks: `Stop` and `SessionEnd`. `Stop` fires every time Claude finishes a response; `SessionEnd` fires when the session itself closes. Both are needed: if the session crashes, `SessionEnd` never fires, but `Stop` has already run after the last response. Both run with `async: true`, so the main session feels nothing.
"Stop": [{ "hooks": [{ "type": "command", "async": true, "timeout": 5,
"command": "~/.claude/hooks/memex-ingest.sh" }] }]2. Spin up a separate instance. The hook script launches claude -p in the background and returns immediately. -p is non-interactive — runs once and exits. Using nohup + disown detaches the process from the hook, so the main session has no idea it's running.
3. Make it read the rules first. This part matters: if you just tell an LLM "organize this," it writes however it wants. Ignores your file format, ignores everything. So I baked this into the first line of the prompt:
```markdown
## Read these before starting anything
1. CLAUDE.md — overall structure
2. wiki/CLAUDE.md — note graph rules
3. app-memory/CLAUDE.md — app interaction memory rules
4. projects/CLAUDE.md — project layer rules
```

Making it read the files first is far more effective than copy-pasting the rules directly into the prompt. Reading the files brings the rules in alongside their surrounding context.
4. Classify and append. The background instance reads the session JSONL and, following the rules, attaches fragments to the relevant note files.
5. Add a safety net. There are cases where the hook doesn't fire — crashes, network issues, unknown reasons. So I registered a launchd job in `~/Library/LaunchAgents` (macOS's equivalent of cron). It runs every 30 minutes and picks up any JSONL modified within the last 35 minutes.
find "$PROJECTS_DIR" -name "*.jsonl" -mmin -35 \
-not -path "*ObsidianVault*" | head -20The -not -path "*ObsidianVault*" is critical. Sessions that were running inside the notes folder need to be excluded. Without this, a session created during ingest gets ingested again — infinite loop.
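The launchd job is just a plist dropped into `~/Library/LaunchAgents`. A minimal sketch, where the label and script path are placeholders of my own, not values from the post:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.user.memex-sweep</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/you/.claude/hooks/memex-sweep.sh</string>
  </array>
  <key>StartInterval</key>
  <integer>1800</integer> <!-- every 30 minutes -->
</dict>
</plist>
```

Load it once with `launchctl load` and it survives reboots, which is exactly what a safety net should do.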
## Why this way
Why a separate instance. Running ingest inside the main session mixes "the me writing code" and "the me organizing notes" into the same context. They contaminate each other. It's not a token problem — it's a separation of concerns problem.
Why read the rules files every time. Give an LLM freedom and it writes in its own style. Making it read the rules file first is what keeps the format consistent.
Why add launchd on top. Some sessions will always miss the hook. The 30-minute sweep catches them.
## Limitations
- No sensitive data filter. Passwords, API keys, and internal information can end up mixed into session logs. Right now I review manually.
- Long sessions lose the beginning. Passing a 100-turn day in one shot means the front half gets diluted. Better to split long sessions before passing them.
- Automatic doesn't mean accurate. The bot sometimes attaches a fact to the wrong file. Having it read four rules files first reduced this a lot, but not to zero.
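For the long-session problem, one cheap mitigation, assuming the one-line-per-turn layout described above: cut the JSONL into fixed-size chunks and ingest each separately. The paths here are illustrative, with a 120-line stand-in file playing the session.

```shell
# Split a long session into 50-turn chunks before handing each to ingest.
d=/tmp/split-demo
mkdir -p "$d" && cd "$d"
seq 120 > long-session.jsonl            # stand-in for a 120-turn session
split -l 50 long-session.jsonl chunk-   # → chunk-aa, chunk-ab, chunk-ac
ls chunk-* | wc -l                      # → 3
```

Since each line is a complete JSON object, a line-based `split` never cuts a turn in half.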
## Tomorrow morning too
The hook was running while I wrote this post. It read the just-finished session, decided there was nothing to update, and moved on.
Tomorrow morning, a few note files will have changed again. This pipeline is one piece of a larger structure — the whole thing is at half-memex.