Phil Johnston, II
Developer Relations
Most people iterating with AI agents hit the same wall eventually. The output quality plateaus. You tweak prompts, refine skills, adjust context. The results are fine. But they stop getting better. I hit that wall recently, and I think I found a technique worth sharing.


I've been building two apps with AI agents doing most of the heavy lifting. One is a ham radio propagation tool for iOS. The other is a game for my daughter. Both started the same way: I gave the agent a prompt, it built something, and the result was fine. Just fine. Functional, forgettable, and indistinguishable from every other AI-generated app out there. Then I changed one thing, and the quality of the output shifted entirely…