
I Didn't Know What I Was Building

· 4 min read

I shut down all five agents. My coffee had gone cold.

What the hell went wrong?

Traced it back to the beginning.

Not the AI’s fault. “Make a date format function.” It did. Good work. “Parse the config.” It did. Clean job. AI did exactly what I asked. Every single time.

Not the prompt’s fault either. “In this format.” “This data.” “Process it like this.” Specific enough.

So why did I end up with ten thousand lines of spaghetti? Why did the same function get built five times? Why was there a security hole?

If it’s not the AI and it’s not the prompts, what is it?


I didn’t know what I was building.

I thought I did. “A tool that connects three computers into one.”

But what does that mean?

Sync files? Share screens? Distribute processes? Send commands remotely? All of the above? Some of the above? In what order?

“Connect three computers” is a wish, not a design.

At what point is it “done”? I had no answer to that question.

No answer meant this kept happening:

Monday: “Let’s do file sync first.” Wednesday: “Wait, shouldn’t screen sharing come first?” Friday: “Remote commands are the most urgent.”

Direction changed every time. AI built in that direction every time. Every time convincing. Every time different. Every time slightly wrong.

Why is this dangerous?

AI doesn’t say “Hey, last week we were doing file sync. Why are we suddenly on screen sharing?” It just follows orders. Direction changes, AI follows silently.

A developer would say “Hold on, shouldn’t we finish this before moving on?” I’m not a developer. Can’t read the code, so I don’t even know what was built before. AI doesn’t remember and I don’t know.

So every time, it built from scratch. The same thing. A different way. Five times.


After realizing this, a sentence I’d read somewhere came back to me.

“Implementation is free.”

In the AI era, writing code is free. Making files. Free. Claude does it. Writing functions. Free. Claude does it. Creating tests. Free. Claude does it. Fixing bugs. Free. Claude does it.

It’s true. I didn’t write a single line of code. AI wrote all of it.

Implementation really is free. But the problem wasn’t implementation.

“Deciding what to build.” That’s not free. AI can’t do that. Only humans can.

And I skipped that decision. “If I just start building, something will come out.” “I’ll figure out the direction once something works.”

Something came out. Ten thousand lines of spaghetti.

Skip the decision and AI makes its own decision every time. But AI’s decisions are different every time. Different context. Different conversation. Different moment.

Each one is rational on its own. All together? Insanity.

Imagine a robot with 28 left arms. Each arm is well-built. But there’s no right arm. The legs walk backwards. That’s what “rational decisions” without direction look like.


Karpathy coined “vibe coding.”

Type in English, get results. Build programs without knowing code. That’s vibe coding.

True. But there’s a catch.

It works for developers. For people who can’t code, it works differently.

Why?

Developers can read code. They see what AI built. They can tell “this is a minor fix” from “this changes the whole direction.”

I can’t tell the difference. Can’t read code.

AI says “I changed this like so.” I say “cool.” Whether that was a minor tweak or an architectural overhaul? No idea. “It runs? Cool.”

Developers are locals with built-in GPS. A Seoul local going from Euljiro to Jongno — need a map? No. They know the roads. The dead ends. The shortcuts.

I’m a tourist dropped in a foreign country with no map. AI says “this way” and I follow. But I can’t tell if that’s a right turn or a complete reversal.

“What if I just write better prompts?” That’s what I thought at first.

Made prompts more specific. Named variables. Set formats. Better? A little. But the root problem didn’t change.

No matter how good the prompts are, they’re useless without direction. A lost passenger giving the taxi driver detailed instructions — “AC on high, windows closed, FM radio” — but never saying where to go. The taxi just keeps driving in circles.

Prompts weren’t the problem. There was no destination.

What am I building? What does it do? Who uses it? When is it done?

I had none of these four. Didn’t matter how good my prompts were. Didn’t matter which AI I used.

No destination. The taxi meter just kept ticking.

Developers call that destination a “spec.”

Spec. I didn’t even know what that word meant back then.
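Answering those four questions in writing is, roughly, what a minimal spec is. A sketch of what one might have looked like for the "connect three computers" tool (the scope choices here are illustrative, not the author's actual plan):

```
What am I building?   A CLI tool that syncs one shared folder across three machines.
What does it do?      Watches the folder and pushes changed files to the other two.
Who uses it?          Me, on my own three computers, on one home network.
When is it done?      Edit a file on machine A; it appears on B and C within a minute.
                      Screen sharing and remote commands are out of scope for v1.
```

Half a page like this would have given every Monday, Wednesday, and Friday the same destination.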

