Open Source Projects Are Cached Agent Output
I have mostly been writing code like this lately:
1. Spend a lot of time in plan mode to generate a truly good plan.
2. Have the agent generate a comprehensive TODO list.
3. Run that TODO list in a wiggum loop to generate code for the project.
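A wiggum loop is, in essence, the same prompt fed to an agent over and over until every TODO item is checked off. A minimal sketch in Python, where `run_agent` is a hypothetical stand-in for whatever coding agent you actually drive (the checkbox TODO format and helper names are illustrative, not from any specific tool):

```python
import re

def run_agent(prompt: str, todo: str) -> str:
    """Stand-in for a real coding agent invocation.

    Here it just checks off the first open item so the loop
    terminates; a real agent would also write the code for it.
    """
    return re.sub(r"- \[ \]", "- [x]", todo, count=1)

def wiggum_loop(todo: str, prompt: str, max_iters: int = 50) -> str:
    """Re-run the agent against the TODO list until every item
    is checked off (or we hit the iteration cap)."""
    for _ in range(max_iters):
        if "- [ ]" not in todo:
            break  # all items done
        todo = run_agent(prompt, todo)
    return todo

todo = "- [ ] write parser\n- [ ] add tests\n- [ ] update docs"
done = wiggum_loop(todo, "Work on the next unchecked TODO item.")
```

The loop itself is trivial; the cost is that each iteration is a full agent run, which is why the process burns so many tokens.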
The whole process can be time-consuming and expensive. It takes time to iterate on the prompt, and sometimes you get to step 2 or 3 only to realize your plan was wrong in some fundamental way. It’s expensive because all of this burns a lot of tokens.
Consequently, for medium-sized tasks, I tend to skip the wiggum loop and/or the TODO list steps. For small tasks, I abandon the plan altogether and simply try to zero-shot or one-shot the code.
This process has changed how I think about open source software. I now see open source projects as materialized agent output–cached code. Its purpose is to save me time and tokens.
In the most extreme case, you have something like Attractor. StrongDM (of dark software factory fame) open sourced the project a few months ago. The repository has no code; it’s simply a series of “NLSpec” prompts. They define NLSpec as: NLSpec (Natural Language Spec): a human-readable spec intended to be directly usable by coding agents to implement/validate behavior. It’s really just a plain-English prompt.
The Attractor README.md says simply:
Supply the following prompt to a modern coding agent (Claude Code, Codex, OpenCode, Amp, Cursor, etc):
agent> Implement Attractor as described by https://github.com/strongdm/attractor
With Attractor, StrongDM has taken care of step 1 in my list above. They have done the work of generating a good plan and writing a good prompt. But you still have to pay for step 2 and step 3 yourself–both with time and money.
A less extreme case is pi–a minimalist AI agent toolkit. The repository has plenty of code in it, but it has been built with a very opinionated philosophy. The maintainer describes his approach in, What I learned building an opinionated and minimal coding agent:
My philosophy in all of this was: if I don’t need it, it won’t be built. And I don’t need a lot of things.
…
pi does not and will not support MCP. I’ve written about this extensively, but the TL;DR is: MCP servers are overkill for most use cases, and they come with significant context overhead.
…
pi’s bash tool runs commands synchronously. There’s no built-in way to start a dev server, run tests in the background, or interact with a REPL while the command is still running. This is intentional. Background process management adds complexity: you need process tracking, output buffering, cleanup on exit, and ways to send input to running processes.
…
I just want to keep this focused and maintainable. If pi doesn’t fit your needs, I implore you to fork it. I truly mean it.
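The synchronous bash tool described above is simple precisely because it blocks. A minimal sketch in Python of that design (the function name and return shape are my own illustration, not pi’s actual implementation):

```python
import subprocess

def bash_tool(command: str, timeout: int = 60) -> dict:
    """Run a shell command synchronously: block until it exits,
    then return its output. No process tracking, no background
    output buffering, no cleanup on exit: the simplicity the
    maintainer is defending.
    """
    result = subprocess.run(
        ["bash", "-c", command],
        capture_output=True,
        text=True,
        timeout=timeout,  # the one concession: don't hang forever
    )
    return {
        "stdout": result.stdout,
        "stderr": result.stderr,
        "exit_code": result.returncode,
    }

out = bash_tool("echo hello")
```

Everything background-process management would require (tracking, buffering, cleanup, sending input to running processes) simply never comes up, because `subprocess.run` returns only after the command has finished.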
This isn’t a particularly novel philosophy, to be honest. Keeping code compact, extensible, and opinionated is just good software practice. Rails has had an opinionated design philosophy for decades. SlateDB’s CLEAN_SLATE.md has a similarly minimalist philosophy. That last point in the quote above is novel, though.
Forks have become very cheap in the age of AI. This is one of the staff-engineer lessons I’ve had to unlearn. Forks are no longer a sign of failure or a maintenance burden. They are a feature.
Rather than providing a full end-to-end solution, it’s OK to provide a kernel–be that a prompt, a plan, or some code–that saves users time and tokens.