Indie AI engineer in Toronto. I build things, ship them before they're perfect, and write about what I learn along the way.
Currently obsessed with the question: how do you know if AI output is actually good? Not "feels good" — measurably good, against standards you define yourself.
⊨ — My answer to the question above. I'm building scoring models — standards of judgment you train from examples, not rules. No LLM in the loop, deterministic, sub-second. Live API, Claude Code plugin, go break it.
Stream of Consciousness — A Claude Code plugin for brains that don't do well with to-do lists. Things flow in, decay over time, and either get resolved or restreamed. No tags, no priorities, just vibes and a clock.
interference — Where I think out loud about AI, building, and shipping as a one-person shop.
tarasyanchynskyy — The personal stuff.
Building in public, one bit at a time.
