The Failure of AI Skepticism: Why Manual Coding Is Already the Wrong Layer
In recent months I keep seeing the same pattern: someone posts yet another example of 'vibe coders' who used AI to generate a project, left API keys exposed in the frontend, forgot authorization checks, opened an S3 bucket to the entire internet, or shipped code with trivial SQL injection.
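To make one of those failure classes concrete: the SQL-injection variant usually comes down to building queries by string concatenation instead of using parameterized queries. A minimal sketch in Python with sqlite3 (the table and names are purely illustrative):

```python
import sqlite3

# In-memory database with a single users table (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    # Passing "' OR '1'='1" returns every row instead of one user.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows: [('alice',)]
print(find_user_safe("' OR '1'='1"))    # matches nothing: []
```

The fix is one line, which is exactly why it is so telling when generated code ships without it.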
What follows is usually a little celebration of supposed professional superiority:
'You see, I told you, all this AI stuff is total nonsense'
'Vibe coders have embarrassed themselves again'
'AI will never replace a real engineer'
'Learn the basics first, then generate code'
And I get where this comes from. I really do. Because when someone who does not understand architecture, security, system boundaries, or operations assembles a product out of prompts, they often do not build a product. They build a crooked toy.
But the conclusion people draw from this is completely wrong.
The problem is not AI at all.
The problem is that people try to use AI as a cheap junior developer without reviews, without SDLC, without tests, without threat modeling, and without understanding the system.
This is like giving someone an excavator and, when they knock down the neighbor’s house, saying: 'See, construction machinery is total nonsense, real people dig with their hands.'
Now the unpleasant part.
I think the future of development is not 'AI helps write code'.
That is a very short transitional phase we have mostly already passed.
The only viable future is a full departure from manual code writing. Because manual coding is simply too low-level a way to express intent.
For decades we have been doing the same thing:
a person understands a business problem -> translates it into an architecture -> translates the architecture into code -> writes tests -> deploys -> sees how it fails -> fixes it -> repeats.
AI cuts out the most mechanical and time-consuming layer in this chain: actually writing the code.
And yes, right now it looks clumsy. Sometimes dangerous, as if a drunk junior got root access to production.
But all of that is only an argument against the absence of engineering culture, not against AI.
Because if an agent is writing code, engineering requirements do not disappear. They move up a level:
• you need to understand architecture, not just syntax;
• you need to know how to define system boundaries;
• you need to design interfaces and contracts;
• you need to think about security before code generation, not after keys have leaked;
• you need to build pipelines for verification, testing, and deployment;
• you need to be able to explain the task to the agent so it does not just 'spew some crap' but actually reaches the definition of done.
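In practice, 'moving up a level' means encoding the definition of done as an automated gate that agent output must pass before it is accepted. Here is a minimal sketch: the gate names and the secret-scan patterns are my own assumptions, and in a real pipeline the other gates would shell out to your actual test runner, linter, and scanners:

```python
import re
from typing import Callable

def scan_for_secrets(diff: str) -> bool:
    # Reject anything that looks like a hard-coded credential.
    # Patterns are illustrative, not an exhaustive ruleset.
    patterns = [r"AKIA[0-9A-Z]{16}", r"(?i)api[_-]?key\s*=\s*['\"]\w+"]
    return not any(re.search(p, diff) for p in patterns)

def accept_change(
    diff: str, gates: list[tuple[str, Callable[[str], bool]]]
) -> tuple[bool, list[str]]:
    """Run every gate; accept the change only if all of them pass."""
    failed = [name for name, gate in gates if not gate(diff)]
    return (not failed, failed)

gates = [("no-secrets", scan_for_secrets)]
ok, failed = accept_change("api_key = 'abc123'", gates)
print(ok, failed)  # False ['no-secrets']
```

The point is not this particular script; it is that the human's job becomes designing and tightening these gates, not typing the code that goes through them.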
The developer of the future is not the person who can handwrite yet another API endpoint the fastest. It is the person who can formulate a system, break it into the right abstractions, set constraints, verify the result, and understand where the agent went wrong.
The DevOps/SRE of the future is not the person who manually writes the twentieth Helm chart and proudly calls themselves a Senior YAML developer. It is the person who designs the operational model: observability, security, rollback, capacity, degradation, recovery, secrets, environments, and boundaries of responsibility.
In other words, the cult of manual work disappears wherever it no longer gives an advantage.
And this is where the real existential pain starts for many:
Because a huge part of IT identity was built on 'I know how to write code.'
I know a language.
I know a framework.
I know how to do by hand what others cannot.
But if an agent starts writing the code, the value shifts. Suddenly it turns out that knowing syntax is not a profession. It is just one of the tools. And we are back to the idea that craftsman-style developers are not needed, just like 'devops' engineers who manually update binaries on servers at night.
This becomes equivalent to being able to do long division quickly after calculators appeared. Is it useful? Yes. Needed for understanding? Yes. Sufficient to be a valuable professional? No.
So when I see yet another post like 'AI has generated unsafe code again, haha, the bubble has burst', I have only one question:
Do you really think it will look the same in 5 years? Forget 5 years. Are you sure you are even seeing the full picture?
Do you honestly think models, agents, IDEs, CI/CD, security scanners, runtime verification, and autonomous pipelines will stay stuck at the level of 'write me a function to calculate a weighted rating'?
Of course not.
First AI wrote code: functions, classes.
Next AI will write the full application code by itself, plus tests and migrations.
Then AI will read errors on its own, fix tests, and open PRs.
Then AI will start designing changes within a given architecture.
After that we will see teams of agents where one writes, another reviews, a third tries to break it, a fourth checks security, a fifth deploys to staging, and a sixth monitors metrics.
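That division of labor can be sketched as a pipeline of role-specialized agents, each consuming the previous one's output. Everything below is hypothetical structure, not a real framework; in a live system each role function would wrap an LLM call with its own prompt and tools:

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    code: str
    notes: list[str] = field(default_factory=list)

# Each role is a plain function Change -> Change.
def writer(c: Change) -> Change:
    c.code = "def weighted_rating(scores, weights): ..."  # placeholder draft
    c.notes.append("writer: drafted implementation")
    return c

def reviewer(c: Change) -> Change:
    c.notes.append("reviewer: checked style and contracts")
    return c

def breaker(c: Change) -> Change:
    c.notes.append("breaker: probed edge cases")
    return c

def security(c: Change) -> Change:
    c.notes.append("security: scanned for injection and leaked secrets")
    return c

def run_pipeline(change: Change, roles) -> Change:
    # The orchestration itself is trivial; the engineering value lives in
    # choosing the roles, their order, and what each one is allowed to veto.
    for role in roles:
        change = role(change)
    return change

result = run_pipeline(Change(code=""), [writer, reviewer, breaker, security])
print(result.notes)
```

The human's leverage here is in the pipeline design: which roles exist, what each may block, and how disagreements between them get resolved.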
And then manual coding will become roughly what manual package assembly without a package manager is today: sometimes needed, sometimes interesting, sometimes useful for understanding, but in a normal process nobody does it every day.
In fact, we have already started down this path. More and more companies are cutting 'craftsmen' by the hundreds and thousands every week and keeping only broad specialists who understand architecture and can explain it to an agent.
Bad vibe-coders are not proof that AI is useless. They only confirm that without engineering thinking, AI turns into a very fast problem generator. (The problem is actually in the term 'vibe-coder' itself, but that is another topic.)
The main mistake skeptics make is that they look at the current failures of people without an engineering mindset and draw conclusions about the technology.
It is like looking at cars and saying: 'horses are more reliable, faster, and do not need gasoline'.
So my forecast is simple:
those who now laugh at AI because of crooked and unsafe code will soon either be managing agents themselves, or they will not be able to compete with those who do.
Because a person with strong engineering thinking plus a network of AI agents will do more, faster, and cheaper than a team of people who proudly write everything by hand and call it 'real development'.
The future is not about replacing engineers with vibe-coders.
The future is about replacing manual coding with engineering-driven agent orchestration and deep system understanding.
And if your only defense against AI is laughing at someone who put an API key on the frontend, bad news: you have already lost.