The Slop Machine Needs a Pilot

Gin · ai ramble

I like tools that make computers suffer instead of me.

They let me move faster, chew through boring work, explore unfamiliar codebases, and ask stupidly large questions without spending half a day spelunking through docs, source files, and whatever cursed abstraction someone shipped in 2019.

But there’s a catch, because of course there is. Computers are involved.

Programming with AI is closer to programming itself than people want to admit. Same core skill, different interface.

Most of the discourse falls into one of three buckets:

- “AI writes unusable slop and real engineers should ignore it.”
- “AI is about to replace programmers entirely.”
- “It’s just fancy autocomplete, nothing really changed.”

Reality is more nuanced and boring.

AI agents are tools. Powerful tools, but still tools. The quality of what you get depends heavily on how you use them, what context you provide, what constraints you enforce, and whether you can tell when the machine is confidently shooting itself in the foot while yelling “You’re absolutely right!”.

That last part matters most.

Because without a good operator, AI will produce shit code riddled with bad decisions. Faster than before, too. Truly inspiring progress.

Mario Zechner wrote a great post about this: slow the fuck down, understand your agent, review its output, and stop letting tiny mistakes compound into structural damage.

That’s the part people keep skipping. They treat the agent like an oracle, then act shocked when the foot-shooting starts.

Ok, and?

The hot take is this: working well with AI is a bottomless skill, just like programming.

You are not just “prompting.” You are decomposing problems, supplying context, choosing tradeoffs, reviewing output, correcting course, and deciding what “good” means before the code exists.

That’s very fucking similar to what we were doing before AI, isn’t it?

We’ve heard the “programming is 80% thinking and 20% coding” argument a thousand times over. Unless you’re a 10x programmer, a.k.a. a human-powered slop machine, you probably agree with the general shape of it.

AI shifts the craft from writing every line yourself to steering, validating, and integrating work produced by something else.

That does not remove the engineering skill.

It moves where the skill is applied.

To steer an agent effectively, you still need to know:

- what you actually want built, precisely enough to describe it
- what the surrounding system expects and tolerates
- which tradeoffs matter for this particular problem
- whether the output is actually good, or just plausible

That last one is where beginners get eaten alive.

If you’re just getting started with programming, AI feels magical. You can prompt Claude all day long and end up with something usable. But if you don’t understand what the code is doing, what good decisions look like, what classes of mistakes your agent tends to make, and why those mistakes are mistakes in the first place, you’re digging yourself a nice cozy hole.

The machine can give you output.

It cannot give you taste.

This is also why experience transfers even when the domain changes.

You may not know the language, the framework, or the subsystem yet, but you still know how to debug. You know how to distrust clean explanations, chase observable behavior, compare claims against evidence, and notice when a fix smells like it only works because the model got lucky.

That’s the part beginners don’t have yet.
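“Compare claims against evidence” sounds abstract, so here’s the cheapest version of it: before you believe the agent’s “fixed it!”, pin the claim down as a test. A minimal sketch in Python, with a made-up `slugify` helper standing in for whatever the agent says it repaired:

```python
# Minimal sketch: turn the agent's claim into an executable check.
# `slugify` is a hypothetical helper the agent claims to have fixed;
# the test captures the reported bug and the behavior that must not regress.

def slugify(title: str) -> str:
    # stand-in for the agent's patched implementation
    return "-".join(title.lower().split())

def test_slugify_claimed_fix():
    # the original bug report: runs of spaces produced empty segments
    assert slugify("Hello   World") == "hello-world"
    # behavior that was fine before and must stay fine
    assert slugify("Already Fine") == "already-fine"
```

If the test passes, fine. If it only passes because the model got lucky, you’ll find out the next time it touches that file, which is exactly the point.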

There are levels to steering

AI lowers the floor.

You can go from idea to something-that-runs much faster than before. You get unstuck faster. You explore more. You build things you probably wouldn’t even attempt a few years ago.

Cool.

But the bottleneck didn’t disappear. It moved.

Once implementation gets cheap, the hard parts become framing the problem, giving the agent the right context, validating what it spits out, catching subtle failures, and knowing what actually matters versus what merely looks right.

That means the skill gap doesn’t go away. It changes shape.

A beginner with AI can build more than before.

An experienced engineer with AI can move at a pace that would’ve been borderline stupid a few years ago.

AI doesn’t flatten engineering. It amplifies it.

The progression looks a lot like programming itself.

At first, you’re happy if the thing runs.

Then you start caring whether it’s maintainable.

Then you start caring whether it should exist at all.

AI work has the same ladder.

A beginner asks:

- “Can you build this?”
- “It’s broken, fix it.”

That produces motion. Sometimes useful motion. Sometimes the kind of motion where a Roomba finds dog shit and becomes a Jackson Pollock machine.

A more experienced operator asks:

- “Here’s the context and the constraints; implement it this way.”
- “Does this fit the existing patterns, and will it survive the next change?”

A senior operator goes one level higher:

- “Should this exist at all?”
- “What is the agent allowed to touch, and what stays off-limits?”
- “What does ‘good’ mean here, before any code exists?”

That’s the part people miss when they reduce AI work to “prompting.”

The skill is not writing magic words into the rectangle.

The skill is knowing what the rectangle needs to be told, what it should not be allowed to touch, and when its answer is technically correct but strategically stupid.

Some AI skills will rot

There is an uncomfortable question here.

Is “being good at steering AI” actually a long-term skill, or are we just adapting to current limitations?

The honest answer is: both.

Right now, models need babysitting. They are powerful, but they are not great at self-correction. So you end up supplying:

- the context they didn’t gather
- the constraints they won’t infer
- the course corrections when they drift
- the validation they can’t do for themselves

That makes operator skill matter a lot.

But if models get better at running their own loops, validating themselves, exploring branches, and converging on solutions, some of today’s agent-wrangling rituals will become obsolete.

A lot of the current dance is just compensating for the fact that the tools are still dumb in specific ways.
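To make “the current dance” concrete: most of it is a loop you run by hand, because the model won’t run it for itself. A rough sketch, where `ask_agent` is a stand-in for whatever agent CLI or API you’re actually driving:

```python
# Sketch of today's babysitting loop. You own the verification, not the model.
import subprocess

def ask_agent(prompt: str) -> None:
    """Hypothetical stand-in: send the prompt, let the agent edit the tree."""
    ...

def steer(task: str, max_rounds: int = 3) -> bool:
    prompt = task
    for _ in range(max_rounds):
        ask_agent(prompt)
        # validation the model doesn't reliably do for itself yet
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; now the human review starts
        # feed back the observable failure, not your guess about the cause
        prompt = f"{task}\n\nTests failed:\n{result.stdout[-2000:]}"
    return False  # stop before small mistakes compound; intervene yourself
```

Every line of that loop is compensation. If models learn to run it themselves, the loop rots as a skill, and that’s fine.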

And as the tools improve, the engineering work moves upward. Less syntax, more boundaries. Less “write this function,” more “what is this system allowed to optimize for, and where must it stay understandable?”

Because even if the AI gets better at implementation, someone still has to answer:

- what the system is for
- which tradeoffs are acceptable
- what it is allowed to optimize for
- where it must stay understandable

That’s still engineering.

Today, steering looks like:

- writing careful prompts and supplying context by hand
- reviewing every diff
- catching small mistakes before they compound

Later, it probably looks more like:

- defining boundaries and constraints up front
- building the evaluation that decides what “good” means
- deciding where the system must stay legible

Same job. Higher layer.
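One hedged guess at what the higher layer looks like in code: boundaries stop living in review comments and start living in executable rules. A toy sketch, assuming a made-up layout where nothing under `app/handlers/` may import the database layer directly:

```python
# Toy boundary check: the constraint is executable, so it binds
# agents and humans equally. The layout (app/handlers, app.db) is hypothetical.
import pathlib
import re

FORBIDDEN = re.compile(r"^\s*(?:import|from)\s+app\.db\b", re.MULTILINE)

def check_boundaries(root: str = "app/handlers") -> list[str]:
    """Return handler files that reach into the database layer directly."""
    return [
        str(path)
        for path in pathlib.Path(root).rglob("*.py")
        if FORBIDDEN.search(path.read_text())
    ]

if __name__ == "__main__":
    violations = check_boundaries()
    if violations:
        raise SystemExit("boundary violations:\n" + "\n".join(violations))
```

Wire that into CI and the agent can write whatever it wants inside the fence. It just doesn’t get to move the fence.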

The hard part was never just writing code

There’s a nice fantasy where AI eventually handles everything.

Define the problem, validate it, implement it, done in one shot.

That only works if the definition of “correct” is clean. We can’t get that right in a fucking Jira ticket, so we’re definitely not getting it right just because the model got smarter.

Real systems are:

- full of ambiguous, contradictory requirements
- shaped by constraints nobody wrote down
- judged against a “correct” that shifts depending on who you ask

Implementation is often the easy part.

Deciding what “correct” means is where things get messy.

That doesn’t go away just because the machine can write code faster than you.

It gets worse if people mistake faster implementation for better understanding.

The slop loop

There’s a grim failure mode here.

Once a project reaches “AI critical mass”, with enough bad decisions compounded into its conventions, no AI will be able to save you.

The agent reads the existing codebase.

The existing codebase is full of bad patterns.

The agent follows those patterns because that is what “consistent with the project” means.

Now the bad patterns reproduce.

The next agent sees even more examples of the same bad decisions and treats them as stronger convention.

Congratulations. You’ve invented a compost heap with autocomplete.

This is the slop loop.

And it is one of the reasons “AI-generated code is bad” is the wrong diagnosis.

The issue is not just that AI can produce bad code.

The issue is that AI can normalize bad code faster than humans can feel the pain of it.

AI weakens the natural penalty for messy systems

There used to be a built-in punishment for writing garbage systems.

You’d open the code later and suffer.

Bad abstractions, no docs, weird behavior. You paid for your sins in time and frustration.

LLMs change that.

They can brute-force their way through nonsense better than humans can.

I was struggling with some stupid bugs in a third-party metrics platform and asked the model to figure it out. The model started reading minified JS straight from the page, parsed it, figured out the actual behavior, and patched the integration path.

That’s a threshold few humans will cross.

Most of us will just open a support ticket, but AI will Sherlock the fuck out of it.

That changes incentives.

It doesn’t remove the cost.

It defers it.

If the machine can deal with garbage, people tolerate more garbage. The perceived cost of doing things badly drops, so people just… do things badly.

“AI will figure it out later.”

This is why vibe-coding isn’t just funny internet debris. It’s an early warning sign.

Even with weaker models, people were already doing “build me this, make no mistakes” and shipping broken SaaS full of bugs and security holes.

The issue wasn’t model quality. It’s that people have zero self-control.

Give humans something that kinda works and they’ll happily skip understanding everything underneath it. We’ve been doing this with every abstraction we’ve ever built. AI just makes the abstraction wider and the skipping cheaper.

As models improve, that temptation gets worse.

If the system can recover from missing docs, weird behavior, and straight-up garbage code, people start thinking they don’t need to understand anything anymore.

That’s the actual danger.

Not weak AI.

AI strong enough to make your booboos stop hurting.

The future skill is deciding how much opacity you tolerate

This is where the engineering skill keeps moving upward.

If AI keeps getting better, a lot of decisions we care about today may matter less at the human layer.

Database choice, language, component boundaries, internal protocols… maybe the AI handles more of that and builds something better than the standard stack anyway.

Sounds efficient.

Also sounds like a great way to build something locally optimal and globally incomprehensible: a system that works beautifully right up until it doesn’t, at which point nobody can explain it without replaying the entire machine-shaped reasoning chain that created it.

So the question shifts.

Not just:

“Is this the right technical decision?”

But:

“How much opacity are we willing to tolerate, and where must a human still be able to follow the thread?”

That’s not separate from the original point.

That is the original point, followed to its ugliest conclusion.

If programming with AI is still engineering, then the most important engineering decisions move away from syntax and toward boundaries, constraints, evaluation, and legibility.

The pilot does not disappear just because the plane can fly itself.

The pilot becomes the person deciding where autopilot is allowed to be trusted, when it must be overridden, and whether anyone still knows how to land the damn thing when the screen goes dark.

AI gives you leverage. Real leverage.

AI lets you move faster, explore wider, and offload work that used to eat whole afternoons for no good reason.

That’s good.

I like tools that make computers suffer instead of me.

But leverage is not understanding.

If you let the machine absorb every consequence, every rough edge, every confusing abstraction, and every little mistake before it hurts, you don’t get a cleaner system. You get a system whose pain has been deferred.

And deferred pain compounds.

So use the slop machine. Steer it hard and make it useful. But keep the system legible enough that when the machine stops being helpful, a human can still walk in, understand what broke, and fix it without needing to summon the same god that created the mess.

Make your booboos hurt early.


Disclaimer: AI was used while writing this post, because of course it was. I outsourced some phrasing, not the thinking. The booboos are still mine.