Blog
Notes on engineering, design, and what I'm learning while building.
How the rush to 'vibe code' with AI is introducing massive technical debt, security vulnerabilities, and maintenance nightmares—and why AI pair programming is the smarter path forward.
Vibe coding promises speed, but often delivers bloat, vulnerabilities, and debt.
It started with a frustrated phone call from my friend—let's call him Dr. Miguel. Miguel is brilliant: PhD in psychology, published researcher, the kind of person who can dissect complex human behavior with surgical precision. He'd caught the vibe coding bug and decided to build an app where psychologists could write, exchange ideas, and debate. "How hard could it be?" he thought. "Just tell the AI what I want, and boom—instant app!"
For weeks, everything seemed magical. Miguel would describe features in plain English, and his AI coding assistant would dutifully generate components, pages, and functionality. When I suggested he switch from template-based rendering to MDX for easier essay writing, the AI handled the conversion seamlessly. Miguel was living the vibe coding dream.
Then came the call: "My app won't deploy anymore. Vercel says the build is failing, but everything works perfectly on my end. Can you take a look?"
What I found when I arrived at his house was a master class in how AI can turn a simple application into an unmaintainable nightmare. After digging through the code, I discovered mysterious imports scattered throughout the codebase—modules that weren't used anywhere, random utility functions that served no purpose, and dependencies that existed solely to support other unnecessary dependencies.
"What are these for?" I asked, pointing to a particularly bewildering import statement.
Miguel stared at the screen. "I have no idea. The AI added those."
But here's where it gets truly absurd: When Miguel had reported the deployment issues to the AI, instead of identifying and removing the unnecessary code causing the build failures, the AI had gone into full hacky-patch mode. It added dozens of lines of workaround code—environment checks, conditional imports, build-time hacks—all designed to mask the symptoms in development while completely ignoring the root cause.
The MDX conversion? The AI had deleted some of the old template-based pages but left others, creating a Frankenstein codebase with two different rendering systems running side by side. The repository was littered with dead code, redundant functions, and patches upon patches upon patches.
As I spent the next few hours cleaning up the mess—removing unused imports, deleting redundant code, and fixing the actual problems—I had an unsettling realization. Miguel, brilliant as he is, had no understanding of what was running his application. He was completely dependent on an AI that had no understanding of good software architecture, no concept of technical debt, and no ability to distinguish between fixing problems and hiding them.
That night, I couldn't shake the feeling that we were witnessing something much bigger than one confused deployment. We were seeing the birth of a new kind of software crisis—one where smart, capable people are unknowingly creating digital disasters because they're trusting AI to make engineering decisions it's fundamentally unqualified to make.
This is the story of vibe coding: the seductive promise of instant software development that's actually creating a generation of unmaintainable, insecure, and incomprehensible applications.
The software development world is experiencing a seismic shift. Computer scientist Andrej Karpathy, a co-founder of OpenAI and former director of AI at Tesla, coined the term vibe coding in February 2025. It describes a coding approach that relies on large language models (LLMs): programmers generate working code by providing natural language descriptions rather than writing it by hand. The promise is intoxicating: describe what you want in plain English, let AI generate the code, and ship faster than ever before.
But beneath the glossy demos and viral tweets lies a growing crisis.
Vibe coding describes a chatbot-based approach to creating software: the developer describes a project or task to a large language model (LLM), which generates code from the prompt. The developer does not review or edit that code, evaluating it solely through tools and execution results and asking the LLM for improvements. Unlike traditional AI-assisted coding or pair programming, the human avoids examining the code, accepts AI-suggested completions without review, and prioritizes iterative experimentation over code correctness or structure.
In essence, vibe coding represents a hands-off approach where developers "fully give in to the vibes" and trust AI to handle all the coding details. The human becomes more of a product manager, describing desired outcomes while the AI becomes the sole architect and implementer.
The security implications of vibe coding are nothing short of catastrophic. Recent research paints a terrifying picture of just how dangerous this approach has become.
Veracode's 2025 GenAI Code Security Report tested over 100 large language models across Java, Python, C#, and JavaScript. The goal? To see if today's most advanced AI systems can write secure code. Unfortunately, the state of AI-generated code security in 2025 is awful. The research found that 45% of code samples failed security tests and introduced OWASP Top 10 security vulnerabilities into the code.
But it gets worse. Java was the riskiest language, with a 72% security failure rate across tasks. Think about that for a moment: nearly three out of four AI-generated Java code samples contained serious security vulnerabilities.
The most concerning finding? AI tools failed to defend against Cross-Site Scripting (CWE-80) in 86% of relevant code samples. These aren't obscure edge-case vulnerabilities; they are basic security failures that any competent developer should catch.
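To make that concrete, here's a minimal sketch of the pattern behind most of those failures, shown in Flask for illustration (the routes and handlers are hypothetical, not drawn from the Veracode samples): interpolating user input straight into HTML versus escaping it first.

```python
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

# The vulnerable pattern (CWE-80): user input lands directly in the
# HTML response, so a ?name=<script>...</script> payload executes
# in the visitor's browser.
@app.route("/greet-unsafe")
def greet_unsafe():
    name = request.args.get("name", "")
    return f"<h1>Hello, {name}!</h1>"

# The one-line fix: escape untrusted input before it reaches the page.
@app.route("/greet")
def greet():
    name = request.args.get("name", "")
    return f"<h1>Hello, {escape(name)}!</h1>"
```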
At least 18 popular JavaScript packages that are collectively downloaded more than two billion times each week were briefly compromised with malicious software after a developer was successfully phished. The attack targeted packages including chalk, debug, strip-ansi, and ansi-styles. The malicious payload acted as a cryptocurrency wallet drainer, injecting itself into the browser environment and hooking into network requests and common wallet APIs.
What makes this attack particularly relevant to vibe coding is how rapidly it spread through AI-generated applications. As one security expert observed: "For about the past 2 years orgs have been buying AI vibe coding tools, where some exec screams 'make online shop' into a computer and 389 libraries are added and an app is farted out. The output = if you want to own the world's companies, just phish one guy in Skegness."
This encapsulates the vibe coding problem: AI tools automatically include massive dependency trees without human oversight, creating a perfect storm for supply chain attacks to propagate rapidly across thousands of applications.
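A first line of defense is simply making that dependency tree visible. Here's a minimal sketch, assuming an npm v2/v3 lockfile (package-lock.json) in the current directory, that lists everything an AI-scaffolded app actually pulls in so a human can review it before shipping:

```python
import json

# Surface every resolved package in package-lock.json so the full
# dependency tree gets human eyes before the app ships.
with open("package-lock.json") as f:
    lock = json.load(f)

# In lockfile v2/v3, the keys of "packages" are paths like
# "node_modules/chalk"; the empty key "" is the root project itself.
deps = sorted(p for p in lock.get("packages", {}) if p)

print(f"{len(deps)} resolved packages:")
for path in deps:
    print(" -", path.removeprefix("node_modules/"))
```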
Consider the Tea app case study, where a women's safety and dating app suffered a devastating security breach that exposed approximately 72,000 images, including 13,000 government ID photos from user verification and 59,000 publicly viewable images from posts and messages. The shocking truth? Nobody actually "hacked" the app—its Firebase storage was left completely open with default settings. Investigators noted: "They literally did not apply any authorization policies onto their Firebase instance."
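That failure mode is cheap to test for. Here's a minimal sketch (the bucket name is a placeholder, not the Tea app's) that probes whether a Firebase Storage bucket answers unauthenticated listing requests at its public REST endpoint:

```python
import urllib.error
import urllib.request

# Probe the Firebase Storage REST endpoint with no credentials at all.
# A 200 response means the bucket's objects are publicly enumerable.
BUCKET = "example-app.appspot.com"  # placeholder bucket name
url = f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o"

try:
    with urllib.request.urlopen(url) as resp:
        print(f"HTTP {resp.status}: bucket listing is publicly accessible")
except urllib.error.HTTPError as err:
    print(f"HTTP {err.code}: unauthenticated access refused")
```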
Another example comes from Databricks' security research, where they used vibe coding to create what appeared to be a functional Snake game. Although it worked, the network layer transmitted Python objects via pickle, exposing the system to arbitrary remote code execution (RCE).
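The pickle choice is worth dwelling on, because it's a classic AI-generated shortcut. A minimal sketch of why unpickling bytes from the network is remote code execution, alongside the boring JSON alternative (the game-state dict is illustrative):

```python
import json
import pickle

# Unpickling runs code chosen by whoever produced the bytes.
# This payload only calls print, but it could call anything.
class Exploit:
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

wire_bytes = pickle.dumps(Exploit())
pickle.loads(wire_bytes)  # prints the message: attacker code executed

# Safer: plain-data formats like JSON cannot smuggle executable objects.
state = {"snake": [[3, 4], [3, 5]], "score": 12}
assert json.loads(json.dumps(state)) == state
```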
While security vulnerabilities grab headlines, the technical debt crisis created by vibe coding may be even more insidious.
GitClear's latest report exposes rising code duplication and declining quality as AI coding tools gain in popularity: duplicated code blocks are up roughly eightfold while refactoring rates decline.
AI-generated code often ignores fundamental engineering principles, turning simple tasks into bloated implementations. Over time, this makes codebases harder to read, modify, and maintain, driving up costs and multiplying bugs.
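The bloat is easy to recognize once you've seen it. A contrived but representative example (not taken from any real codebase): the nested-conditional shape AI assistants often emit for a trivial task, next to what a maintainer accountable for the code would write.

```python
# The over-engineered shape AI assistants often produce for a trivial task.
def display_name_bloated(user):
    result = None
    if user is not None:
        if "name" in user:
            if user["name"] is not None and user["name"] != "":
                result = user["name"]
            else:
                result = "Anonymous"
        else:
            result = "Anonymous"
    else:
        result = "Anonymous"
    return result

# The same behavior, one readable line.
def display_name(user):
    return (user or {}).get("name") or "Anonymous"

assert display_name_bloated(None) == display_name(None) == "Anonymous"
assert display_name_bloated({"name": "Ada"}) == display_name({"name": "Ada"}) == "Ada"
```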
Vibe coding creates code that nobody truly understands—what experts call "knowledge debt." The app works, but no one can safely modify or extend it.
Even simple prompts can generate expansive dependency trees. Research shows that more than 5% of package references from commercial models, and roughly 22% from open-source models, pointed to packages that do not exist, enabling "slopsquatting": attackers publish malicious packages under the hallucinated names and wait for them to be installed.
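A cheap guard against slopsquatting is to confirm that every AI-suggested dependency actually exists before installing it. A minimal sketch against the public npm registry (the package list is illustrative):

```python
import json
import urllib.error
import urllib.request

# Check each AI-suggested package name against the npm registry;
# a 404 is a red flag that the name may have been hallucinated.
suggested = ["chalk", "totally-hallucinated-pkg-xyz"]  # illustrative list

for name in suggested:
    try:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
            latest = json.load(resp)["dist-tags"]["latest"]
            print(f"{name}: exists (latest {latest})")
    except urllib.error.HTTPError as err:
        print(f"{name}: not on the registry ({err.code}), possible hallucination")
```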
The solution isn't to abandon AI tools; it's to use them responsibly, with humans in the loop.
Tools like GitHub Copilot enhance developer capabilities while preserving human judgment. Treat suggestions as drafts: write comments to guide the model, then review and polish the output before merging.
Treat AI as a powerful assistant—not a replacement. Invest in education, reviews, and integrated security tooling. Set standards for AI-assisted development.
Vibe coding promises velocity but often ships fragility. AI pair programming delivers leverage with accountability. The future is bright—if we keep humans firmly in control.
2025 GenAI Code Security Report
Veracode's comprehensive analysis testing over 100 large language models, revealing that 45% of AI-generated code samples failed security tests and introduced OWASP Top 10 vulnerabilities.
Cybersecurity Risks of AI-Generated Code
Georgetown's Center for Security and Emerging Technology examination of the systemic security vulnerabilities introduced by AI coding tools across different programming languages.
The Most Common Security Vulnerabilities in AI-Generated Code
Endor Labs analysis identifying the most frequent security flaws in AI-generated code, including Cross-Site Scripting failures in 86% of relevant samples.
18 Popular Code Packages Hacked, Rigged to Steal Crypto
Brian Krebs' investigation into the massive npm supply chain attack that compromised packages with over 2 billion weekly downloads, demonstrating how vibe coding amplifies attack vectors.
Widespread npm Supply Chain Attack Puts Billions of Weekly Downloads at Risk
Palo Alto Networks Unit 42 analysis of how cryptocurrency wallet drainers spread through popular JavaScript packages, particularly affecting AI-generated applications.
20 Popular npm Packages Compromised in Supply Chain Attack
The Hacker News coverage of the supply chain attack targeting packages like chalk, debug, and strip-ansi, showing how vibe coding accelerates malicious payload distribution.
Wallet-Draining npm Package Impersonates Nodemailer
Socket's analysis of typosquatting attacks that specifically target AI-generated code through dependency confusion and package name hallucinations.
Crypto wallets targeted in widespread hack of npm, GitHub
ReversingLabs investigation into how attackers exploit the massive dependency trees commonly generated by AI coding tools.
How AI-Generated Code Accelerates Technical Debt
LeadDev analysis of how AI tools create maintainability challenges through code duplication, bloated implementations, and architectural inconsistencies.
How AI-Generated Code is messing with your Technical Debt
Kodus examination of the long-term costs of AI-generated code, including the 8× increase in duplicated code blocks and declining refactoring rates.
Building AI-Resistant Technical Debt
O'Reilly Radar guide to managing the technical debt crisis created by AI coding tools, with strategies for maintaining code quality and architectural integrity.
Passing the Security Vibe Check: The Dangers of Vibe Coding
Databricks security research demonstrating real-world vulnerabilities in vibe-coded applications, including the Snake game case study with remote code execution flaws.
The trouble with vibe coding: Why AI-driven software needs discipline
Equal Experts analysis of why the hands-off approach to AI coding fails in production environments and the importance of human oversight.
Can AI really code? Study maps the roadblocks to autonomous software engineering
MIT CSAIL research examining the fundamental limitations of AI coding tools and why autonomous software development remains problematic.
When the Vibes Are Off: The Security Risks of AI-Generated Code
Lawfare analysis of the legal and regulatory implications of security vulnerabilities in AI-generated code, particularly in critical infrastructure.
Beyond the Vibe: A Deep Dive into the Dangers of Vibe Coding, Lessons from the Tea App Incident
Adnan Masood's detailed analysis of the Tea app security breach, where vibe coding led to completely open Firebase storage exposing 72,000 user images.
When AI nukes your database: The dark side of vibe coding
CSO Online investigation into catastrophic failures caused by AI-generated database operations and the importance of human validation.
Getting a Cybersecurity Vibe Check on Vibe Coding
Dark Reading examination of real-world security incidents caused by unreviewed AI-generated code and the need for security-first development practices.
Top Vibe-Coding Security Risks
Netlas comprehensive analysis of the most dangerous security vulnerabilities introduced by vibe coding, with practical examples and mitigation strategies.
Developers with AI assistants need to follow the pair programming model
Stack Overflow research demonstrating how collaborative AI usage with human oversight produces better outcomes than autonomous AI coding.
Will Coding AI Tools Ever Reach Full Autonomy?
IEEE Spectrum analysis of the technical and practical limitations preventing fully autonomous AI coding tools from replacing human developers.
GitHub Copilot Documentation
Official documentation for GitHub Copilot, showcasing best practices for AI-assisted development with human oversight and review.
AI Pair Programming in 2025: The Good, Bad, and Ugly
Builder.io comprehensive guide to effective AI pair programming practices, contrasting collaborative approaches with autonomous vibe coding.
Report finds AI-generated code poses security risks
eeNews Europe coverage of industry reports showing widespread security vulnerabilities in AI-generated code across multiple programming languages.
AI can write your code, but nearly half of it may be insecure
Help Net Security analysis of the 45% security failure rate in AI-generated code and the implications for enterprise software development.
AI code suggestions sabotage software supply chain
The Register investigation into how AI tools introduce supply chain vulnerabilities through hallucinated package names and unnecessary dependencies.