Notes on Agentic Coding
Agentic coding tools are here, whether you’ve wanted them or not. They’re going to be part of your company’s toolset; I don’t think there’s any going back after your CFO sees that they can buy 4 seats for your team versus adding headcount (which I have thoughts about, but therapy isn’t until next Tuesday).
After playing with all the new tools from the past week (GitHub Copilot Agent Mode, Codex, and Jules), these are generally the components I wanted from AI-based coding tools. Ya sure, hang out with me while I’m coding and let me ask you what the difference is between `guard let` and `if let` in Swift, but also let me give you tasks I honestly don’t want to do, for you to finish for me or at least head-start them.
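Since it came up: here’s a minimal sketch of that `guard let` vs `if let` difference (the function names are made up for illustration). `if let` scopes the unwrapped value to its branch, while `guard let` exits early on nil and keeps the value in scope for the rest of the function.

```swift
// if let: the unwrapped value only exists inside the branch.
func doubledIfLet(_ text: String?) -> Int? {
    if let value = text.flatMap(Int.init) {
        return value * 2  // `value` is only in scope here
    }
    return nil
}

// guard let: bail out early, then keep using the unwrapped value.
func doubledGuardLet(_ text: String?) -> Int? {
    guard let value = text.flatMap(Int.init) else {
        return nil  // early exit when the optional is nil (or not a number)
    }
    return value * 2  // `value` stays in scope for the rest of the function
}
```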
I tried to cobble a tool like this together with Ollama and OpenAI’s Codex CLI, but I lacked the environments to execute these tasks in, and making agents write commits for version control is a PITA. But I keep thinking you need strong guardrails around these tools beyond a “human in the loop” approach.
The Guardrails
- Documentation and context documents
- Well-written and predictable tickets/tasks to execute
- Really good CI/CD pipelines + testing in general
Side note: the guardrails that make agentic coding tools better are the same things that make human developers better at their own jobs.
Another random note: English will be a wasteful communication vessel when agents need to communicate with each other, because they’ll have far more efficient modes of communication that transcend the human experience and didn’t need centuries to develop. If you actually care about that, read The Atomic Human by Neil Lawrence.
Documentation and context
Consistent documents, like a README.md and a CONTRIBUTING.md file. The `.github/copilot-instructions.md` file is also good, but I don’t like that there are now 7 different tool-specific configurations just like it. These documents need to be clear and exhaustive, and there’s very little excuse not to have them in a project at this point, not in a world where you can prompt “Read the code in this project and outline a README doc”, plus DeepWiki exists.
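For what it’s worth, a `.github/copilot-instructions.md` doesn’t need to be fancy. Something like this sketch (contents entirely illustrative) goes a long way:

```markdown
# Copilot instructions (illustrative)

- This is a Swift package; run `swift build` and `swift test` before proposing changes.
- Follow the conventions in CONTRIBUTING.md.
- The architecture overview lives in README.md; key dependencies are documented under docs/.
- Keep PRs small and link the issue they resolve.
```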
Adding little Mermaid diagrams to context documents that lay out the project’s architecture helps too, along with pointers to where documentation for the key dependencies lives, like llms.txt (a cool trend).
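A tiny Mermaid block in a README can sketch the architecture in a handful of lines (this component layout is hypothetical):

```mermaid
graph TD
    App[iOS app] --> API[REST API]
    API --> DB[(Postgres)]
    API --> Jobs[Background workers]
    Jobs --> DB
```

Both humans and agents can read this: it renders as a diagram on GitHub, and it’s plain text an agent can parse without any vision tricks.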
Tickets
Some engineering manager once said to me, “Write tickets like you’re going to drink heavily and need to reference them again.” He then told me about his custody situation and all his years working with Java, and then all those threads suddenly came together. But honestly — very good advice. Write tickets with clear acceptance criteria, the problem you’re solving, and – if it’s a bug – actual reproduction steps. Using the `.github/ISSUE_TEMPLATE` directory might really help because, on top of the GUI it exposes on GitHub, you can also give those configs to Claude or ChatGPT and be like “Help me write issues with this template.”
Tickets need to be thorough, full of links and details.
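A minimal sketch of an issue form along those lines (the file name and field ids are illustrative) that nudges everyone, human or agent, toward the same details:

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml (illustrative name)
name: Bug report
description: Report something broken
body:
  - type: textarea
    id: problem
    attributes:
      label: What problem are you seeing?
    validations:
      required: true
  - type: textarea
    id: repro
    attributes:
      label: Reproduction steps
      placeholder: "1. Go to… 2. Tap… 3. See error"
    validations:
      required: true
  - type: textarea
    id: acceptance
    attributes:
      label: Acceptance criteria
    validations:
      required: true
```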
Tests that run in CI
Testing is always meant to shift the burden of proof. So in this example, all the code that your agentic coding tool of choice wrote BROKE THE TESTS, and the auto-generated PR is blocked because the build step broke and the tests FAILED.
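Concretely, a bare-bones GitHub Actions workflow that runs on every pull request might look like this (the file name and build commands are illustrative; swap in your own):

```yaml
# .github/workflows/ci.yml (illustrative)
name: CI
on:
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: swift build   # assuming a Swift package; use your build command
      - name: Test
        run: swift test    # a red step here is what blocks the auto-generated PR
```

Pair this with a required status check on your main branch so a failing run actually blocks the merge rather than just looking sad in the PR.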
The same methods that made PRs between human contributors accountable to each other also safeguard against regressions and bugs that will inevitably be created by agentic coding tools.
TLDR: The things that made your team good will also benefit the robots who are trying to take your jobs.
Also, let’s say some regulatory body comes in and bans all these tools, or they become less desirable in other ways: the guardrails you need to make them successful will benefit your project long after they’re gone.