A caveated & overdue defence of Vibe Coding


Vibe coding, or LLM (Large Language Model) driven development if you’re feeling fancy, is the act of developing software via the extensive use of AI tooling, whether it be GitHub Copilot, ChatGPT, or any of the other offerings. It mostly involves prompting these tools to generate large amounts of, or even all of, the code used to make a website, app, or in some cases, a whole startup. Now, as someone born in Gen Z who acts millennial and became a Linux boomer, I reckon if I tried to vibe I’d probably break, and the term itself causes an automatic grimace every time I think of it. Compounded with my reputation amongst colleagues as the resident AI sceptic, you might expect me to pile on hating it. That isn’t the case, though: viewed from the right angle, and through the right lens, vibe coding could be a net positive for tech.

Lowering the barrier

The biggest win of vibe coding, in my view, is the democratisation of software creation. The barrier to entry is now barely more than an internet connection (and with tools like LM Studio, even that’s negotiable). This opens the door for a whole new wave of would-be developers, whether they just want to build a small tool for themselves, or break into the industry. People who might have bounced off the steep learning curve of syntax, boilerplate, and config hell now get to engage with code on more creative, expressive terms.

That’s not nothing. If AI helps someone turn an idea into a working prototype, or solve a problem in their life that they couldn’t before, that’s a win. Full stop. It expands the pool of creators, especially for folks whose strengths lie more in ideation, design thinking, or domain knowledge than memorising language quirks. Yes, there are risks (we’ll get to those), but accessibility in tech is an emphatic, objective good.

It’s not just for beginners, either. Even for experienced developers, vibe coding has value, especially for throwaway tasks and short-lived scripts. Personally, if I need a quick bash script, a one-off data transform, or a mocked-out API, I’ll just boot up ChatGPT, iterate a couple of times, and move on. I don’t need it to be elegant or scalable, I just need it to work now. That kind of speed unlocks a new level of productivity. Automating things that were previously too small to justify the effort, those weird glue tasks we all ignore, can save serious time and mental energy. (Relevant XKCD below.)

And that saved cognitive load? It means more headspace for the real problems, the ones worth thinking deeply about.
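For a concrete flavour of those glue tasks, here’s a minimal sketch of the sort of one-off transform I’d happily let an LLM draft: flattening a CSV export into JSON to feed a mocked-out API. The file and column names are made up for illustration; the point is that it only needs to work once.

```python
import csv
import json
from pathlib import Path

# Hypothetical one-off: turn a CSV export into JSON for a quick mock API.
rows = []
with Path("export.csv").open(newline="") as f:
    for row in csv.DictReader(f):
        # Keep only the fields the mock needs, coercing the numeric one.
        rows.append({
            "id": row["id"],
            "name": row["name"].strip(),
            "total": float(row["total"] or 0),
        })

Path("mock_data.json").write_text(json.dumps(rows, indent=2))
print(f"Wrote {len(rows)} records to mock_data.json")
```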

An XKCD comic about whether it’s worth automating a task, based on how much time it costs

Now about those caveats…

A gif of the IQ test from the Mike Judge film Idiocracy

Unfortunately, it’s now time to look at the angles other than that right one. The main caveat, in my mind, is the risk it brings, or more accurately, the multiple forms of risk it brings.

The big one is the potential loss of understanding of the code being run: it could contain a serious vulnerability, be broken in some subtle way, and that’s before considering the difficulty of debugging AI-generated code. This stems from the inherent problems with LLMs generating code that I brought up in a previous blog post: beyond producing whatever is statistically closest to something that looks like code achieving the request, current-gen LLMs lack any sort of deep cognition to properly problem-solve, or to avoid deprecated libraries and known vulnerabilities. And when it comes to imports, they’ve often been seen pulling in libraries that aren’t needed, or using versions that are scarily out of date.

One of the more subtle but serious risks is the gradual erosion of deep technical understanding. Over-reliance on AI tools could create a generation of developers who never learn what’s happening under the hood, who can build with speed, but not debug with depth. Any seasoned engineer can recall a bug that took hours (or days) to unravel, only to discover the culprit was something obscure: a rogue encoding mismatch, an edge-case compiler difference, or a threading issue buried three abstractions deep. That kind of troubleshooting requires more than copy-pasting prompts, it demands intuition, patience, and a working mental model of the stack. If those skills fade into specialisation, or worse, disappear entirely, we may end up with systems that are fast to build but fragile to maintain, and teams that ship quickly but can’t recover when things go sideways.

AI-generated code tends to be whatever the model feels like in the moment. Variable names jump between styles. Folder structures are inconsistent. Functions balloon in size or get split arbitrarily. That is unless you “trauma-dump” a bunch of code style context into every single prompt. And even then… good luck getting consistency across generations. Without conventions or patterns, code becomes harder to maintain, test, and scale. It’s not that AI can’t generate “good” code, it’s that it has no sense of what good code actually means in context. It’s parroting patterns, not enforcing discipline. That’s fine for one-offs. It’s a disaster for long-term projects.

That being said…

The thing is, most of these problems aren’t new. We’ve already built systems to deal with them, and they work. Take bugs, security vulnerabilities, and inconsistent code style: these are exactly the kinds of issues modern CI/CD pipelines are designed to catch. Quality gates exist specifically to prevent a rogue “10x developer” (or AI assistant) from pushing a thousand untested, potentially dangerous lines of code into production. Static analysis tools like JetBrains Qodana and SonarQube, linting tools like eslint, dependency scanning, unit test thresholds, even mutation testing if you’re serious: these are all well-established. If used properly, they don’t care whether the code came from an AI, a junior, or a lead engineer. Bad code gets stopped at the gate. The challenge isn’t the tooling, it’s making sure we apply it just as rigorously to AI-generated code as we would to anything else.
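To make the quality-gate idea concrete, here’s a rough sketch of the kind of check a CI job might run before anything merges. The specific tools (ruff for linting, pytest for tests) are illustrative assumptions rather than a prescription; what matters is that the gate doesn’t care who, or what, wrote the code.

```python
import subprocess
import sys

# Each gate is a named command; a non-zero exit code means that gate failed.
# Tool choices here (ruff, pytest) are illustrative, swap in whatever your stack uses.
GATES = [
    ("lint", ["ruff", "check", "."]),
    ("tests", ["pytest", "--maxfail=1", "-q"]),
]

failed = [name for name, cmd in GATES if subprocess.run(cmd).returncode != 0]

if failed:
    print(f"Quality gate failed: {', '.join(failed)}")
    sys.exit(1)  # non-zero exit fails the CI job, so the merge is blocked
else:
    print("All quality gates passed")
```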

And as for the loss of deep understanding, that’s not a new fear either. We’ve seen it before in the Stack Overflow era, where developers copied and pasted without context. The solution wasn’t to ban Stack Overflow, it was to teach better habits. Code review practices adapted. Mentors asked “do you understand why this works?” Teams started enforcing documentation, unit testing, and design walkthroughs to make sure understanding was part of the process. The same applies here. If we treat AI output like gospel, we’re in trouble. But if we treat it like a tool, one that still demands human judgement, reflection, and accountability, we’re fine. We can build processes around pairing, prompt transparency, explain-your-code practices, and deeper on-boarding. We can teach developers not just how to use AI, but how to interrogate it. The danger isn’t the tool, it’s pretending the tool can think.

These issues also assume that these vibe coders are all going to enter the industry professionally and do a maliciously bad job of considering issues like security, testing, and maintainability. That’s a possibility for sure, but it’s also a possibility for developers who don’t lean heavily on AI. Regardless of the tools used, some will always do the bare minimum, and some will try their best.

And for the gatekeeper elephant in the room

Another XKCD comic, this one is parodying the "real programmers" trope

Some of the loudest discourse around vibe coding seems fixated on defining what makes someone a “real” developer, and honestly, that feels like missing the forest for the trees. Yes, vibe coding comes with real issues (we’ve just gone over them), but when it comes to who “counts” as a developer? Who cares? If someone, thanks to the accessibility that AI provides, can finally build something, express an idea, solve a problem, chase an opportunity they otherwise couldn’t, that’s a net gain. That’s the bigger picture. Gatekeeping based on how many abstractions deep someone can go misses the actual point of software: to empower people to create, communicate, and solve things. If AI helps more people do that, that’s something worth embracing, flaws and all.

In short: vibe coding might not make you a ““real”” developer, but if you ship the thing, solve the problem, or open the door for someone else? That sure sounds real enough to me.

#AI#Ramblings