Phil Johnston, II
Developer Relations
I've been building two apps with AI agents doing most of the heavy lifting. One is a ham radio propagation tool for iOS. The other is a game for my daughter. Both started the same way: I gave the agent a prompt, it built something, and the result was fine. Just fine. Functional, forgettable, and looking like every other AI-generated app out there. Then I changed one thing, and the entire quality of the output shifted…
Most people iterating with AI agents hit the same wall eventually. The output quality plateaus. You tweak prompts, refine skills, adjust context. The results are fine. But they stop getting better. I hit that wall recently, and I think I found a technique worth sharing.
Most developers using AI for coding dump everything into one chat window and hope for the best. I think I may have found a better way: a workflow that treats different AI tools…
AI can generate code, but it can’t replace the human eye for design. Game artists, trained in composition, color theory, and spatial storytelling, are uniquely positioned to shape the next generation of software interfaces.


I asked Claude Code to run a usage report on my sessions from the last two weeks: around 200 sessions and 235 hours across game dev, mobile apps, multi-agent tooling, and infrastructure. I'm publishing the redacted version because it's a useful snapshot of how one person is actually working with AI coding agents in 2026, and because if you sell dev tools, your buyer might look more like this than you think.