The internet is full of posts about building with AI. But 99% of them are landing-page demos, proofs of concept, and trivial tasks.
It's rare to see expert developers sharing how they build production apps used by real people with AI.
But that happened recently. Mitchell Hashimoto, the Ghostty creator, published a post about his AI usage while implementing a complex feature. Building a terminal is no easy task, so I was super curious to see how he uses AI.
I've shared the full transcript of every agentic coding session from implementing the unobtrusive Ghostty updates and provided commentary alongside about my thinking and process. Total cost: $15.98 over 16 sessions. "Vibing a Non-Trivial Ghostty Feature" https://t.co/kRUSWyMSyW
— Mitchell Hashimoto (@mitchellh) October 11, 2025
As expected, the post was really informative and gave me a couple of ideas on how I can improve my AI usage. In this post, I'll highlight the bits that stood out to me.
#1 Planning with OpenAI's o3
Mitchell starts by prompting the AI to create a plan instead of jumping straight to coding. The idea is that the planning step helps both you and, especially, the agent better understand the task.
Even before AI, we knew that planning saves headaches, even though we skip it far too often. It matters even more now, since the AI agent uses the plan as context.
I've also observed that he uses OpenAI's o3 model for planning through Amp's Oracle feature. According to Amp's documentation, o3 is "impressively good at reviewing, at debugging, at analyzing and at figuring out what to do next" (source).
That made me realize that I use a single model for every task. For example, I got used to Sonnet and I keep using it for planning, coding and documentation. From now on, I'll experiment with models and try to pick "the best one for the job".
#2 Save the Plan
Once the planning is done, the final plan is saved in a markdown file that can be referenced later. This way you can easily pass context to the AI agent and ask it to implement a given task.
#3 Shipping Policy
Mitchell's shipping policy is to only ship code he understands. If the AI is able to figure out something he can't, he uses it as a learning opportunity. He studies the code and tries to learn from it. In the cases where he doesn't fully understand it, he discards the code and tries to implement it himself.
This may seem like common sense, but it's worth reiterating. It's quite tempting to hit "accept" on code that looks harmless and move on. Personally, I got bitten by this and learnt my lesson: a seemingly small, harmless change caused a big issue.
#4 Incomplete Code with TODOs
I've also observed that he uses the fill-in-the-blanks technique: writing incomplete code, such as functions with just their names, parameters and TODO comments, and asking the AI to fill in the implementation.
According to his post, that works very well in his case. I've seen other people swearing by this technique as well, but I've never tried it. I guess it's time to give it a go.
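To make the technique concrete, here's a minimal sketch of what a fill-in-the-blanks starting point could look like. Everything here is my own assumption for illustration: the names, the update-checking domain and even the language (Ghostty is written in Zig, and Mitchell's actual stubs aren't reproduced here). The point is simply that you write the signatures and TODO comments yourself, then hand the file to the agent.

```go
// update.go: a hypothetical skeleton handed to the agent, not Ghostty's real code.
package update

import "time"

// Release describes a published version of the application.
type Release struct {
	Version     string
	PublishedAt time.Time
	DownloadURL string
}

// CheckForUpdate returns the newest release if it is newer than the
// currently running version, or nil if we are already up to date.
func CheckForUpdate(current string, feedURL string) (*Release, error) {
	// TODO: fetch the release feed from feedURL
	// TODO: parse the feed and pick the newest release
	// TODO: compare against `current` and return nil if up to date
	return nil, nil
}

// ShouldNotify decides whether to show an unobtrusive notification
// instead of interrupting the user right away.
func ShouldNotify(lastNotified time.Time, interval time.Duration) bool {
	// TODO: only notify if at least `interval` has passed since lastNotified
	return false
}
```

You'd then prompt the agent with something like "implement the TODOs in update.go", which keeps the structure and naming decisions in your hands while the AI fills in the bodies.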
#5 Last AI Check
Besides manually reviewing all the generated code before shipping, he asks the AI about things he may be missing. He does this even when he writes all the code manually.
Conclusion
I really enjoyed the post because it provides a rare look into how an expert developer uses AI to write code. I’ve already taken away a few ideas that I'll implement in my workflow.
I highly recommend checking out his post. It's really good and also includes the transcripts from the AI tool he uses, so you can see exactly how he uses it.
If you came across other similar posts or videos, please drop them in the comments. I’m always looking to learn from others.