The One-Line Prompt
TL;DR:
>10 days since last post, still haven’t poured media
>security audit finds tc-lab Flask API publicly accessible, API key is “$LAB_API_KEY”
>Claude demonstrates by toggling the heater from the public internet
>sales bot goes live, 9/9 correct classifications day one
>Romanian e-commerce site launch push
>set up rclone backups, first run saturates office upload, internet crawls
>WSL traffic invisible to Windows monitoring, diagnose from wrong side for 20 minutes
>fix it, everyone thanks me, don’t mention I caused it
>Anthropic Advanced Claude Code Patterns webinar, show up an hour early
>start an aubergine emoji campaign on the housekeeping message, 85 by end
>Q&A is all IDE questions
>mfw I left the industry 10 years ago because I hated writing code by hand
>realise my 3,021-line CLAUDE.md is caused by one prompt: “add all lessons to claude.md”
>build a replacement: librarian subagent + /save skill + activity hook
>build /blog skill same afternoon
>you’re reading the first post it produced
Ten days. Still haven’t poured media. The room is monitoring baseline while I ship software for every other project I’m running. Here’s what happened, in order.
March 17: The Security Sweep
I ran a VPS security audit across all projects. Routine hardening pass — firewall rules, SSH config, open ports, service exposure.
Then I checked tc-lab.
lab.egc.land — the Flask API the Raspberry Pi agent uses to serve sensor data and control the Shelly relay — was publicly accessible. The only authentication was a hardcoded API key: $LAB_API_KEY. Claude demonstrated the vulnerability during the audit by curling the heater toggle endpoint from the public internet. The heater turned on. Then off. One HTTP request each way, no credentials beyond guessing a predictable key.
Nobody was monitoring the state change because the alert system trusts the API. A weak API key is worse than no protection because it creates a false sense of security. The system was “authenticated” in the config but trivially compromised.
Fixed with Cloudflare Access — proper authentication before the request reaches Flask. The Flask logs showed no evidence of exploitation, but “we got lucky” is not a security posture. The project’s CLAUDE.md has rules about API security. They’re somewhere around line 800 of a 1,256-line file. Whether Claude was still reading that far is an open question.
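For the record, a service-token call through Cloudflare Access looks roughly like this. The endpoint path and variable names are placeholders (not the real routes), but `CF-Access-Client-Id` and `CF-Access-Client-Secret` are Cloudflare's actual service-token header names:

```shell
# Sketch: calling the lab API through Cloudflare Access with a service
# token. The /api/heater/toggle path is a placeholder, not the real route.
lab_call() {
  path=$1
  # Refuse to fire without service-token credentials
  if [ -z "${CF_ACCESS_CLIENT_ID:-}" ] || [ -z "${CF_ACCESS_CLIENT_SECRET:-}" ]; then
    echo "missing Cloudflare Access service token" >&2
    return 1
  fi
  # DRY_RUN=1 prints the target instead of sending the request
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "GET https://lab.egc.land${path} (CF-Access headers attached)"
    return 0
  fi
  curl -fsS \
    -H "CF-Access-Client-Id: ${CF_ACCESS_CLIENT_ID}" \
    -H "CF-Access-Client-Secret: ${CF_ACCESS_CLIENT_SECRET}" \
    "https://lab.egc.land${path}"
}
```

The point is the shape: the credential check happens before the request leaves the machine, and an unauthenticated curl from the public internet now hits Cloudflare, not Flask.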
March 17-22: The Shipping Week
A week of commercial compliance work across three projects — GDPR paperwork, a Romanian e-commerce site launch push, and rclone backups to Google Drive. The rclone setup was one line. It would cause problems later.
March 23: The Sales Bot Goes Live
The sales bot deployed to the VPS — pulls enquiries via IMAP, qualifies against CRM data, drafts responses for human approval.
First lesson within hours: outbound email failed silently. Always test from the actual deployment environment. That lesson would get appended to CLAUDE.md at the end of the session, joining the pile. Whether it would ever be read again is a different question.
March 24, Morning: Production Stability and the Network Incident
Two things happened before lunch.
Sales bot stability. The first daily digest email arrived. Nine emails. Nine correct classifications. Three spam correctly skipped, three staff CRM updates parsed into contacts and opportunities, one maintenance enquiry, one customer confirmation. Every inbound email from day one, handled correctly.
But running in production exposed gaps. Seven stability measures in one session — write locks, health checks, backups, monitoring.
The backup incident. The first scheduled rclone run uploaded 5.66GB out of 6.9GB before the OAuth token expired and the Google API quota kicked in. Meanwhile, a second cron invocation started — the first was still running — and both processes saturated the office upload connection. Latency hit 950ms. The internet slowed to a crawl for everyone in the building.
The traffic was invisible to Windows monitoring tools. rclone was running in WSL, and WSL network traffic goes through a virtual ethernet adapter that Windows doesn’t surface in Task Manager or Resource Monitor. I spent 20 minutes diagnosing from the Windows side — checking router stats, running speed tests, suspecting the ISP — before thinking to check inside WSL. Two rclone processes, maxing the upload pipe.
Fixes: bandwidth limit (--bwlimit 5M), process stacking prevention (pkill before new runs), reduced concurrent transfers, excluded log files from sync. Everyone thanked me when the internet came back.
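The wrapped cron job now looks something like this. The local path and remote name are placeholders; the 5M cap, reduced transfers, and log exclusion are the fixes above. I've sketched the anti-stacking step as skip-if-running, the gentler variant of the pkill approach:

```shell
# Sketch of a stacking-safe rclone cron job. /srv/data and gdrive:backups
# are placeholders; the 5M bandwidth cap is the one from the fix.
run_backup() {
  # Don't stack: if the previous run is still going, bail out
  # (the alternative is to pkill the old run before starting fresh)
  if pgrep -x rclone >/dev/null 2>&1; then
    echo "previous backup still running; skipping this invocation" >&2
    return 0
  fi
  # Build the command once so DRY_RUN=1 can print it instead of running it
  set -- rclone sync /srv/data gdrive:backups \
    --bwlimit 5M \
    --transfers 2 \
    --exclude '*.log'
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$*"
  else
    "$@"
  fi
}
```

With `--bwlimit 5M` the upload can never saturate the office pipe again, and the guard means two cron invocations can never run at once, which is exactly the pair of failures that caused the incident.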
March 24, Afternoon: The Webinar
Anthropic ran a webinar: “Advanced Claude Code Patterns.” I joined an hour early. Not for preparation — to secure a top-three placement of an aubergine emoji on the housekeeping message. By the end, there were 85 aubergines alongside 570 hearts and 291 thumbs up.
570+ attendees from 30+ countries. They opened with a poll: “What level of Claude Code integration do you have?” I picked “deep integration.” That put me in the 15%.
Three architectural patterns that matter:
CLAUDE.md hierarchy. Root CLAUDE.md under 200 lines. Scoped rules files in .claude/rules/ with path-based frontmatter — CSS rules only load when editing CSS, deploy instructions only load when deploying. My current state: one project has a 3,021-line CLAUDE.md. Another has 1,256 lines. Everything loads every session. Instruction adherence degrades with file size. I’d been adding to these files after every session for three months. I knew exactly how they’d gotten that big. I just hadn’t named the problem yet.
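What a scoped rules file looks like in that scheme, roughly. I haven't verified the exact frontmatter key against the docs, so treat `paths` as an assumption, and the rule text here is illustrative:

```markdown
---
paths:
  - "hardware/**"
  - "daemon/**"
---
# Hardware rules
- Read the current sensor state before issuing any relay command.
- Never leave the heater forced on; hand control back to the daemon.
```

A file like this only enters context when a matching path is being edited, instead of riding along in every session the way a 3,021-line monolith does.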
Hooks. Shell scripts that fire on specific events — before a tool runs, after a tool runs, when the user submits a prompt. The key insight: anything that must happen every time belongs in a hook, not a prompt instruction. Two of my projects have a “MANDATORY: Test After Every Change” section in their CLAUDE.md files. Claude follows it most of the time. As a hook on file edits, it fires every time — deterministic enforcement instead of probabilistic compliance.
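As a sketch of what "test after every change" looks like as a hook body: a script registered under PostToolUse with a matcher on edit tools, reading the event JSON from stdin. The JSON field name, the exit-2 convention, and pytest as the test runner are my assumptions, not verified specifics:

```shell
# Sketch: "test after every change" as a PostToolUse hook body rather than
# a CLAUDE.md instruction. Claude Code pipes event JSON to the hook's stdin;
# the field name and exit-code convention are my reading of the hook docs,
# and pytest stands in for whatever the project's test runner is.
handle_post_edit() {
  payload=$(cat)
  # Crude extraction of the edited file's path from the event JSON
  file=$(printf '%s' "$payload" | sed -n 's/.*"file_path" *: *"\([^"]*\)".*/\1/p')
  case "$file" in
    *.py)
      # Exit code 2 is the "blocking error" convention: stderr is fed
      # back to Claude so it sees the failing tests immediately
      pytest -q || { echo "tests failed after editing $file" >&2; return 2; }
      ;;
  esac
  return 0
}
# The actual hook script would end by invoking: handle_post_edit
```

The script fires on every matching edit whether or not the model feels like complying, which is the whole argument for hooks over instructions.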
Subagents and agent teams. Isolated Claude instances with scoped tool access. An Explore agent gets Read, Glob, Grep — it literally cannot edit files. Each runs in its own context window, returns condensed results. Agent teams are the multiplayer version — parallel workstreams coordinating through a shared task list. 66% of attendees picked agent teams as the pattern they were most excited about. So did I — it’s the closest thing to having a team when you’re a solo operator.
The Q&A was almost entirely about IDE integration. “How does this work with VS Code?” “Can I see the agent’s changes in my editor?” I submitted a question about why the webinar player didn’t have a volume slider. It didn’t get approved. I gave the webinar five stars at the end, with the comment: “Good overview of the tools, but I don’t use GitHub or IDEs so half the content wasn’t for me.”

The IDE fixation misses the point. When the agent reads, writes, navigates, searches, and refactors, the IDE collapses to a file viewer. A fancy notepad with colours. I use VS Code roughly as that. Most of the time I don’t even have it open — I work in PowerShell terminals running WSL. Claude Code is the entire interface. I direct, it writes.
This isn’t a flex. It’s an accident of biography. I never liked writing code by hand — the syntax, the manual typing, the tooling overhead. I didn’t stay in the industry. When AI code generation arrived, the part I’d always hated got automated and the part I was always good at — architecture, systems thinking, specifying intent — turned out to be the entire job.
March 25, Morning: 48-Hour Review
Two days of real traffic through the sales bot surfaced five bugs — thread detection, notification routing, urgency flagging, all slightly wrong with real data versus the development environment. All five fixed.
The architectural insight from this: the bot that processes emails should not evaluate its own work. A separate reviewer agent — read-only, running on a different schedule — should audit what the bot did and catch missed emails, classification errors, and dropped threads. That’s directly from the webinar’s subagent patterns. The entity that acts should not be the entity that judges.
Romanian site final push. Three sessions in one day — legal compliance fixes for Romanian commercial law, product catalogue sync, and a cookie banner investigation that revealed the real problem: no documentation distinguishing code-deployed changes from database changes.
March 25, Afternoon: The One-Line Prompt
Every project I run has the same problem. After every session, before every context compaction, for three months, I’ve been running this prompt:
“git commit and save all changes made since the last summary to docs/context-[timestamp].md and add all lessons learned from mistakes to claude.md”
The idea came from a template collection I’d downloaded back in September — a set of pre-built Claude Code agents and commands. One of them was context-save: a dedicated context-manager agent with six categories, structured output, timestamped versioning. I tried using it. A third of the time it overwrote the previous file. Another third it named the file after the session’s content. The remaining third it added a timestamp in a different format each time. Three months of inconsistent saves before I gave up and replaced it with one line.
The one line fixed the naming and killed the triage. By January I was running it after every session in every project. 156 context docs across 12 projects by March.
That’s the rot. The 3,021-line and 1,256-line files didn’t happen in a week. They grew a few lines at a time, every session, and I never noticed because the prompt ran at the end when I was already done thinking. The security rules that should have caught the lab API exposure? Buried. The deployment patterns that should have prevented the sales bot’s silent email failure? Somewhere in the pile. Every lesson I’d learned was being saved and none of them were being used.
The March 14 post documented the same pattern from the other direction. Claude proposed 400 lines of bash analysis scripts for lab reporting. I told it to read its own blog first — the architecture principle is “thin tools for I/O, all business logic in the prompt, no decision-making in application code.” After reading the project’s own principles, Claude rewrote the plan as 80 lines of data pulling. The one-line session-end prompt is the same antipattern: it encodes no logic, has no structure, and delegates zero decisions. It just says “add everything” and Claude adds everything.
The webinar made the fix obvious.
Building /save
Three components, all installed in ~/.claude/ — user-level, available in every project without symlinks:
Librarian agent — a read-only subagent with access to Read, Glob, and Grep only. It receives a session summary, reads every rules file and the root CLAUDE.md, classifies each candidate as CONTEXT or LESSON, checks for duplicates, and returns a structured recommendation. It never writes files. When it’s unsure, it asks — CLEAR items proceed on approval, AMBIGUOUS items present options, NEEDS CLARIFICATION items ask a specific question. No silent guessing.
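Agent files live in ~/.claude/agents/ as markdown with YAML frontmatter; as far as I can tell the fields are name, description, and tools, but check the docs before copying this. A cut-down sketch of the librarian:

```markdown
---
name: librarian
description: Read-only classifier for session knowledge. Never writes files.
tools: Read, Glob, Grep
---
You receive a session summary. Read the root CLAUDE.md and every file under
.claude/rules/, then classify each candidate item as CONTEXT or LESSON,
flag duplicates against existing rules, and mark each item CLEAR,
AMBIGUOUS, or NEEDS CLARIFICATION. Return a structured recommendation.
You never write files yourself.
```

The tools line is the enforcement: without Write or Edit in its tool list, the agent cannot modify anything no matter what its prompt says.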
/save skill — a six-step flow triggered by typing /save. Check environment, compile summary, delegate to librarian, present triage with interactive questions, resolve answers, execute writes and commit. The skill distinguishes between project-level files (committable) and user-level rules files (not git-tracked).
Activity hook — a UserPromptSubmit hook that checks three signals: transcript file size over 500KB, message count over 30, or git diff exceeding 10 files or 200 insertions. If any threshold is crossed, it nudges: “Consider running /save before /compact.” Claude Code’s hooks don’t expose context window percentage, so the three-signal proxy is the best available approximation.
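The three signals boil down to a function like this. The thresholds are the ones above; how the transcript path and message count actually reach the script is glossed over here:

```shell
# Sketch of the three-signal /save nudge. Thresholds match the hook
# described above; the plumbing that supplies the transcript path and
# message count is omitted.
should_nudge() {
  transcript=$1
  msg_count=$2

  # Signal 1: transcript file over 500 KB
  if [ -f "$transcript" ]; then
    if [ "$(wc -c < "$transcript")" -gt 512000 ]; then
      return 0
    fi
  fi

  # Signal 2: more than 30 messages this session
  if [ "$msg_count" -gt 30 ]; then
    return 0
  fi

  # Signal 3: git diff over 10 files or 200 insertions
  files=$(git diff --name-only 2>/dev/null | wc -l)
  inserts=$(git diff --numstat 2>/dev/null | awk '{s += $1} END {print s + 0}')
  if [ "$files" -gt 10 ] || [ "$inserts" -gt 200 ]; then
    return 0
  fi

  return 1
}
# The hook itself ends with:
#   should_nudge "$TRANSCRIPT_PATH" "$MSG_COUNT" &&
#     echo "Consider running /save before /compact."
```

Any one signal crossing its threshold is enough to trigger the nudge; a quiet session trips none of them and stays silent.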
I built all three in a single Claude Code session. The session started by researching how the systems actually work, spawning three subagents in parallel to check the agent, skill, and hook documentation. The spec I’d drafted used field names that don’t exist in the actual system. Three parallel research agents, three corrections, before a single file was created.
Then I ran /save for the first time and it broke. Seven bugs — empty stdout, greedy regex, set -e in hooks, nudge spam, counter resets, stat portability, regex anchoring. All found and fixed in one session.
One surprise: custom agents from ~/.claude/agents/ can be invoked through natural language delegation — “use the librarian agent” — even though the Agent tool’s subagent_type parameter rejects custom names. The debug log confirmed it. It worked in one session, failed in the next, then worked again. Session-dependent, not a platform limitation.
Then /blog
Same afternoon, same pattern. A /blog skill that scans all projects under ~/ for context docs since the last blog post, asks about topic and angle, gathers additional context in a loop (paste text, provide file paths — chat exports from Claude Desktop, notes, whatever), reads the source material, calibrates voice by reading existing posts, drafts, builds locally for preview, and only commits and deploys after explicit confirmation.
The discovery step found 24 context docs across 7 projects and 61 commits since March 15. That’s the material for the post you’re reading now.
Back to the Lab
The same architectural lesson keeps appearing in different forms. March 14: 600 lines of pipeline code deleted, replaced with a 179-line task prompt. March 24: the webinar formalises this as instructions (business logic) + hooks (deterministic enforcement) + subagents (isolated reasoning). March 25: the one-line session-end prompt — no structure, no gates, no separation of concerns — replaced by a librarian that classifies, a skill that orchestrates, and a hook that reminds. The classifier can’t write files. The skill can’t skip approval. The hook can’t block work. The fix for three months of rot is three files and a shell script — roughly the same amount of structure the original template had, that I’d stripped away in September because it looked like overkill.
The 3,021-line CLAUDE.md belongs to a WordPress block theme. But tc-lab has the same problem at smaller scale — hardware control rules, experiment design principles, data analysis patterns, and dashboard conventions all in one file, all loaded every session, whether I’m adjusting the thermostat or designing a growth protocol. Refactoring into scoped rules files is the first concrete application of the webinar patterns. Hardware rules load when editing hardware code. Experiment design loads when planning experiments. The rest stays out of the way.
The lab already has the separation of concerns the webinar formalised. The March 15 post called it “the daemon is the muscle, the agent is the brain” — the thermostat daemon controls temperature directly, the inner agent monitors and audits on a 15-minute cycle. That’s the subagent pattern before I had the vocabulary for it. But the inner agent’s safety checks are still probabilistic — it caught a dangerous 30°C config change within 60 seconds, but “most of the time” isn’t good enough for hardware that controls a heater. A PostToolUse hook on relay commands would make that deterministic.
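A deterministic relay guard could be as small as this. The 30°C ceiling comes from that incident; the action/temperature arguments are placeholders, since a real hook would pull them from the event JSON and the lab API:

```shell
# Sketch of a deterministic relay guard. The ceiling comes from the 30C
# config-change incident; a real PostToolUse hook would extract the action
# and the latest sensor reading from the event JSON and the lab API.
MAX_TEMP_C=30

relay_allowed() {
  action=$1        # "on" or "off"
  temp_c=$2        # latest temperature reading, degrees C

  # Turning the heater off is always safe
  if [ "$action" = "off" ]; then
    return 0
  fi

  # Block heater-on at or above the ceiling; awk handles decimal readings
  awk -v t="$temp_c" -v max="$MAX_TEMP_C" 'BEGIN { exit !(t + 0 < max + 0) }'
}
# In the hook: relay_allowed "$action" "$temp" || { echo "blocked" >&2; exit 2; }
```

Unlike the inner agent's 60-second audit loop, this runs before the command lands, every time, with no judgment involved.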
When real experiments start running, the parallel concerns — temperature profiles, growth measurements, hardware health — map directly to agent teams. Each scoped to its own context, each unable to interfere with the others. The tooling I built this week for managing blog posts and session knowledge is practice for managing a lab.
Where Things Stand
The lab is monitoring baseline. The sales bot is processing real leads with a reviewer agent planned. The Romanian site is nearly there. The blog has been silent for 10 days but the tooling to write it is now better than the tooling to ignore it.
Sixty-one commits across seven projects. Three new Claude Code tools built. Seven bugs found and fixed. One security incident caught. One office internet slowdown caused and silently fixed. One aubergine emoji campaign completed successfully.
Back to it.