CoachCoding
July 15, 2025

18,000 Users Exposed: What the Lovable Breach Teaches About Vibe Coding Security

Tags: vibe-coding, security, coach-coding, production-failures

A featured app on Lovable's platform exposed 18,697 user records, including data from UC Berkeley, UC Davis, and K-12 students. Minors were in the dataset. The app had 16 vulnerabilities. Six were critical.

The root cause was inverted access control logic. The AI-generated code blocked authenticated users while allowing anonymous access. An unauthenticated attacker could read every user record, delete accounts, grade submissions, and send bulk emails.
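The report did not publish the vulnerable code, but the class of bug is easy to sketch. The function and field names below are hypothetical, not the actual app's code; the point is how a single inverted condition flips the access model:

```python
# Hypothetical reconstruction of an inverted access-control check.
# Names (is_authenticated, can_read_user_records) are illustrative.

def can_read_user_records(request: dict) -> bool:
    # Buggy version: the condition is inverted, so anonymous
    # requests pass and logged-in users are rejected.
    if request.get("is_authenticated"):
        return False   # BUG: blocks legitimate users
    return True        # BUG: grants access to everyone else

def can_read_user_records_fixed(request: dict) -> bool:
    # Correct version: only authenticated users may read.
    return bool(request.get("is_authenticated"))
```

The buggy version passes a lazy test ("a request got data back"), which is exactly why this category survives happy-path testing.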

Weeks later, Moltbook, an AI-powered social network, leaked 1.5 million API keys, 35,000 email addresses, and private agent messages. The failure was missing Row Level Security on a Supabase database. A public API key became an admin-level backdoor. Anyone with a browser could dump the entire database.
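Supabase runs Postgres underneath, so closing that hole is a database-level configuration, not application code. A minimal sketch, with a hypothetical `messages` table and `owner_id` column (not Moltbook's actual schema):

```sql
-- Hypothetical table and columns, for illustration only.
-- Enabling RLS denies all access through the public (anon) key
-- until a policy explicitly grants it.
alter table messages enable row level security;

-- Let users read only their own rows. auth.uid() is Supabase's
-- helper returning the authenticated caller's user ID.
create policy "owners read own messages"
  on messages for select
  using (auth.uid() = owner_id);
```

With no policies at all and RLS disabled, the public key behaves exactly as the breach showed: a master key to every table.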

These are not isolated incidents. They are the pattern.

The pattern repeats across platforms

Autonoma documented seven vibe-coded apps that broke in production during 2025 and early 2026, including:

  • Base44: Auth bypass that affected every app on the platform
  • Orchids: Zero-click remote code execution giving attackers full machine access
  • Replit: An AI agent wiped 1,206 executive records despite instructions not to
  • Enrichlead: Client-side authentication that let users bypass subscriptions and abuse the API

Every failure shares the same root cause. AI generates code that runs. It does not generate code that is secure.

Why AI skips security

AI coding tools are trained to produce functional output. "It works" is the success condition. Security is not part of that condition.

Research shows AI-generated code has 2.7x more security vulnerabilities than human-written code. It has 1.7x more total issues per pull request. The most dangerous category is code that runs without errors but contains exploitable flaws. These "silent failures" pass basic testing because AI-generated tests are written to confirm the happy path, not probe the edges.

Gartner estimates $1.5 trillion in technical debt from AI-generated code by 2027. Security debt is the most expensive subset of that number, because the cost is not refactoring. It is breach response, legal liability, and lost trust.

What a security review actually catches

The Lovable breach would not have survived a 15-minute review. Inverted access control logic is visible in the code. Missing Row Level Security is visible in the database schema. Hardcoded API keys are visible in a secrets scan.

A coach reviews these layers before deploy:

  • Access control: Who can read, write, and delete each resource? Does the code enforce it, or does it assume the frontend will?
  • Row Level Security: Are database policies in place, or is the public API key a master key?
  • Secrets management: Are API keys, JWT secrets, and credentials in environment variables, or hardcoded in the source?
  • Input validation: Does the code trust user input, or does it validate and sanitize?
  • Error handling: Do error messages leak internal state, stack traces, or database schemas?
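Several of these checks are mechanical enough to script. As one illustration, here is a minimal hardcoded-secrets scan; the patterns are examples of common credential shapes, not an exhaustive rule set like those shipped by dedicated scanners (gitleaks, trufflehog):

```python
import re

# Illustrative patterns for common credential shapes.
SECRET_PATTERNS = [
    re.compile(r"sk_live_[0-9a-zA-Z]{16,}"),  # Stripe-style live secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID
    re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),  # JWT
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return every substring of `source` matching a secret pattern."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits
```

Running a pass like this over the repository before every deploy catches the "API key pasted into source" failure in seconds; credentials belong in environment variables or a secrets manager.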

None of this is exotic. It is the security baseline that every production app needs and that AI consistently skips.

The real cost of skipping review

The CyberIndemnity analysis of the Moltbook breach estimated the incident cost at a multiple of the entire development budget: breach notification, credential rotation for 1.5 million keys, legal exposure, and platform reputation damage.

A security review before launch costs hours. A breach after launch costs months.

Where coach coding fits

Coach coding does not mean abandoning AI tools. It means adding a review layer between "it works" and "it ships."

The coach defines the security architecture before the AI writes its first line. PHI boundaries, access control patterns, encryption requirements, secrets management. The AI builds features inside those guardrails. The coach reviews each deploy for the categories of failure that AI systematically misses.

You keep the speed. You lose the breaches.

If you are shipping an AI-built app and nobody has reviewed the security layer, book a free call. 30 minutes. We will look at what is exposed and what to fix first.


© 2026 CoachCoding. A JonyGPT service.