Notes on AI-assisted software engineering.
I test the newest AI coding workflows (Claude Code, copilots, agent patterns) and write down what actually improves speed without sacrificing correctness. This is a simple, fast, text-first site: fewer announcements, more receipts.
Latest
short posts • change logs • links
Research log
what I’m testing right now
Claude Code patterns
Making outputs reviewable and safe in real repos.
- “Patch hygiene”: tiny diffs, clean commits, explicit file scope
- Context strategy: repo map, pinned constraints, rolling summaries
- Verification: tests first, static checks, “prove it” prompts
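The verification bullet above can be sketched as a hard gate: run the repo's tests before any AI-written patch is accepted. A minimal sketch, assuming a `verify_patch` helper and whatever test command your project uses — both names are illustrative, not a real tool:

```python
import subprocess

def verify_patch(test_cmd: list[str]) -> bool:
    """Run the project's test command; accept the patch only on a green run.

    test_cmd is whatever your repo uses, e.g. ["pytest", "-q"].
    """
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    # Exit code 0 means the suite passed; anything else goes back to the
    # model along with the failure output ("prove it").
    return result.returncode == 0
```

The point of the gate is that the model never self-certifies: a red run is returned as context for the next attempt, never merged.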
Agent reliability
Where agents help vs. where they cause damage.
- Routing: classify → choose toolchain → execute
- Guardrails: permissions, allow-lists, audit trails
- Evals: golden tasks + regression suite + cost/latency budgets
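The routing-plus-guardrails idea above can be sketched in a few lines: classify the request, look up a toolchain on an allow-list, and refuse anything outside it. This is a minimal sketch; the category names, tool names, and the keyword classifier are all stand-ins (in practice the classifier would be a model call):

```python
# Allow-list: each task category maps to the only tools it may use.
ALLOWED_TOOLCHAINS = {
    "code_edit": ["read_file", "write_patch", "run_tests"],
    "research":  ["read_file", "search_docs"],
}

def classify(task: str) -> str:
    # Stand-in for a model call; a keyword check keeps the sketch runnable.
    return "code_edit" if ("fix" in task or "refactor" in task) else "research"

def route(task: str) -> list[str]:
    kind = classify(task)
    tools = ALLOWED_TOOLCHAINS.get(kind)
    if tools is None:
        # Guardrail: unknown categories get no tools, not a default.
        raise PermissionError(f"no toolchain allowed for {kind!r}")
    return tools
```

Failing closed on unknown categories is the design choice that matters: an agent that can't be routed should do nothing, and every routing decision is loggable for the audit trail.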
LLM “workbench”
A repeatable environment for comparing tools and models.
- Same tasks, same repo, fixed constraints
- Track: time-to-first-solution, bug rate, PR review time
- Notes: what breaks, what scales, what’s hype
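The tracking above amounts to one record per run on a fixed task set. A minimal sketch of that table — the field names and `bug_rate` rollup are my assumptions about what's worth comparing, not a fixed schema:

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    tool: str                       # e.g. "claude-code", "copilot"
    task_id: str                    # same golden task across all tools
    minutes_to_first_solution: float
    bugs_found_in_review: int
    pr_review_minutes: float

runs: list[RunRecord] = []

def log_run(rec: RunRecord) -> None:
    runs.append(rec)

def bug_rate(tool: str) -> float:
    """Average bugs caught in review per run, for one tool."""
    rows = [r for r in runs if r.tool == tool]
    return sum(r.bugs_found_in_review for r in rows) / len(rows)
```

Because every tool runs the same tasks in the same repo under the same constraints, a per-tool rollup like `bug_rate` is a fair comparison rather than an anecdote.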
Personal rule
Borrowed from the best blogger-engineers: write as you go.
- Capture prompts + diffs + outcomes
- Post the useful bits, not the marketing
- Prefer “here’s what happened” over “here’s my opinion”
Projects
things I’m building
About
why this exists
I’m a software leader in P&C insurance technology, obsessed with practical automation. I use AI daily and I’m documenting what works: Claude Code runs, tool comparisons, repeatable agent workflows, and how to adopt this stuff safely in teams.
(Golf + Ireland posts may show up later, but this front page stays AI-focused.)
Subscribe
optional
A simple newsletter signup (Buttondown/Beehiiv/Mailchimp) and an RSS feed are planned once the post structure is in place.
Until then, email me if you want a heads-up when new notes drop.