Field Notes on Vibe Coding: What I Built and What I've Learned
I’m not a software developer. My programming background is in data engineering and data science — data processing, predictive modeling — not in building applications or user-facing tools. Until recently, that distinction mattered a lot. Knowing how to wrangle data doesn’t tell you much about how to structure a plugin, design a UI, or wire up a frontend framework.
What changed is that AI coding assistants have largely closed that gap. It’s now possible to direct the construction of a working application without being the one who knows how to build it from scratch — more like an architect working with a very fast, very literal contractor than like a programmer.
That said, I don’t think technical background is irrelevant to vibe coding. If anything, having some programming experience makes it easier to evaluate what the AI produces, catch structural problems early, and communicate precisely about what you want. The floor is lower than it used to be — you can build real things without deep software experience. But the ceiling is higher if you bring some technical intuition to the direction. That framing — director with domain knowledge, not passive prompter — is the orientation I’ve found most useful.
Here’s what I’ve built over the past few weeks, and what I’ve learned about how to do it well.
What I’ve built
A ThinkCell copycat PowerPoint plugin. ThinkCell is the standard tool for building consulting-style charts in PowerPoint — waterfall charts, Gantt charts, structured slide layouts. I built a plugin that replicates a subset of that functionality for my own use, without the license cost or the dependency on a third-party tool.
A Word Solitaire game with custom categories. A browser-based word game where the numbers and suits of classic Solitaire are replaced by words and pictures, with category sets built around life sciences and AI terminology. I built it as a fun way to quiz myself.
AI News Watch. A tool that monitors and surfaces AI developments across research, industry, and policy — available in the Resources section of this site. Built to solve my own problem of staying current without spending an hour a day on it.
This website. The site you’re reading was built with AI assistance from the ground up. I had opinions about what I wanted it to look and feel like; I didn’t have the frontend skills to execute those opinions from scratch. The AI handled the implementation; I handled the direction and the design decisions.
These projects range from a weekend experiment to something I use daily. What they share is that none of them would exist without AI assistance, and all of them reflect decisions I made — not decisions the AI made for me.
What makes the output better
After enough iterations, some practices have become close to non-negotiable for me. These aren’t rules about when to use AI or what to build — they’re about how to work with it in a way that produces output you can actually rely on.
Be explicit about what good looks like. The single highest-leverage thing you can do before starting is to describe the target state with enough specificity that you could evaluate whether the AI hit it. Not “build me a dashboard” but “build me a dashboard that shows X, Y, and Z, where X is sorted by date descending, where the color scheme uses these values, and where clicking a row does this.” Vague prompts produce output that requires extensive correction. Specific prompts produce output that requires refinement. The difference in total time spent is significant.
Ask for options before committing to an approach. When you’re early in a build and the design isn’t settled, ask the AI to suggest two or three approaches before it implements anything. This sounds slower but isn’t — it surfaces tradeoffs you wouldn’t have thought to ask about, and it’s far cheaper to choose between described approaches than to rebuild after discovering the first one had a flaw. I’ve made the mistake of letting the AI run with a design choice that seemed fine in the moment and turned out to be load-bearing in ways I didn’t anticipate.
Test edge cases deliberately. AI-generated code tends to work well on the happy path and break at the edges. My plugin handled standard inputs cleanly, but it behaved oddly when the input was empty, when a user clicked in an unexpected order, or when the data had a shape that wasn’t covered by the examples I gave. Testing these scenarios explicitly — and feeding the failures back — is the difference between a demo that works and a tool that works. Don’t assume coverage you haven’t checked.
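To make that concrete, here is a minimal sketch of what deliberate edge-case checks can look like. The buildWaterfallSegments helper and its data shape are hypothetical stand-ins for the kind of data-prep code a chart plugin relies on, not the actual plugin code; the point is that the empty-input and zero-delta cases get explicit checks instead of assumed coverage.

```typescript
import assert from "node:assert/strict";

// Hypothetical data-prep helper, illustrative only: turns labeled deltas
// into cumulative waterfall segments.
interface Segment { label: string; start: number; end: number; }

function buildWaterfallSegments(values: { label: string; delta: number }[]): Segment[] {
  let running = 0;
  return values.map(({ label, delta }) => {
    const start = running;
    running += delta;
    return { label, start, end: running };
  });
}

// Happy path: the case AI-generated code usually gets right on the first try.
assert.deepEqual(
  buildWaterfallSegments([{ label: "Revenue", delta: 100 }, { label: "Costs", delta: -40 }]),
  [
    { label: "Revenue", start: 0, end: 100 },
    { label: "Costs", start: 100, end: 60 },
  ],
);

// Edge cases: the scenarios worth testing explicitly and feeding back.
assert.deepEqual(buildWaterfallSegments([]), []); // empty input
assert.deepEqual(
  buildWaterfallSegments([{ label: "Flat", delta: 0 }]), // zero-delta bar
  [{ label: "Flat", start: 0, end: 0 }],
);
```

When a check like this fails, the failing input and the observed behavior go straight back into the next prompt, rather than a vague report that something broke.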
Understand the limitations and design around them. AI has real constraints that I’ve had to learn to manage. It doesn’t maintain context reliably across long sessions or complex projects, the information it draws on can be outdated, and troubleshooting issues that are multimodal (beyond text) is particularly hard. Spending more time with these tools makes the limitations familiar, and that familiarity lets you define approaches to work around them, such as better documentation practices and deliberate test case design. Compensating for that imperfection is part of the craft.
The underlying orientation
The practices above share a common thread: they’re all about being a more deliberate director. The AI is not a search engine you query and evaluate. It’s a collaborator that responds to how clearly you can articulate what you want, how thoroughly you can test what it produces, and how well you understand the shape of the work you’re asking it to do.
The gap between a builder who gets reliable output and one who gets frustrating output is rarely about technical skill. It’s almost always about clarity — clarity of intent, clarity of evaluation criteria, and clarity about where the human judgment has to stay in the loop.