9 Challenges of Vibe Coding (and How Coach Coding Fixes Every One)
We tracked every commit on a real production app built primarily with AI tools. 351 commits over about two months. The velocity was impressive: 153 feature commits, shipping roughly one every 8 hours.
But 97 of those 351 commits were fixes. Nearly 1 fix for every 1.6 features. The AI ships fast. It also breaks fast.
Here are the 9 patterns we keep seeing, with real examples from the commit history, and how coach coding prevents each one.
1. The fix treadmill
The problem: 28% of all commits are fixes. Features routinely spawn follow-up fix commits. One feature (buddy streaks) needed at least 3 separate fix commits. The dashboard footer was added and immediately needed a React hooks violation fix in the next commit. Exercise stats required fixes across individual, athlete, and coach views.
This is the default rhythm of vibe coding: build, break, fix, fix the fix.
How a coach fixes it: A coach reviews your feature before it ships. Not after. The architecture review catches the structural problems that create cascading bugs. You still move fast, but the fix treadmill slows to a walk because the code is built on solid patterns from the start.
2. React and Next.js footguns the AI keeps hitting
The problem: The AI repeatedly introduces framework-specific bugs that an experienced React developer would catch immediately:
- 3 hydration mismatches (SSR/client rendering divergence)
- 2 hooks violations (hooks called inside conditionals or during render)
- Login screen flicker from client-side state timing issues
- Null role users from missing defensive coding for edge cases
These are not exotic bugs. They are well-documented pitfalls that the AI hits over and over because it does not accumulate project-level context between sessions.
How a coach fixes it: A coach knows these patterns by heart. Hydration mismatches, hooks rules, conditional rendering. These are caught during live code review before they reach a commit. The coach also teaches you to recognize them yourself, so you write sharper prompts and spot the bugs in the AI's output before they land.
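The null-role bug above is the simplest of these to show in code. A minimal sketch of the defensive pattern, assuming a hypothetical User type (the names are illustrative, not from the project): never let a nullable field reach the rendering layer without a default.

```typescript
// Hypothetical types: role can be null for invited-but-unconfirmed users.
type Role = "athlete" | "coach" | "admin";

interface User {
  id: string;
  role: Role | null;
}

// Defensive default at the boundary, so every component downstream
// can assume a non-null role instead of branching on null everywhere.
function effectiveRole(user: User): Role {
  return user.role ?? "athlete";
}

console.log(effectiveRole({ id: "u1", role: null }));    // "athlete"
console.log(effectiveRole({ id: "u2", role: "coach" })); // "coach"
```

One small function at the data boundary replaces a dozen scattered null checks, which is exactly the kind of structural call a coach makes in review.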
3. Platform incompatibilities
The problem: The AI wrote code using Neon database transactions that worked in dev but crashed on Vercel's serverless runtime. This required switching the entire DB driver from neon-http to neon-serverless. Code that works locally but breaks in production is one of the most expensive categories of bugs.
How a coach fixes it: A coach has deployed to Vercel, Railway, Fly, and AWS hundreds of times. They know which database drivers work in which runtimes. They flag the neon-http transaction issue before you write your first migration, not after your users hit a 500 error.
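The fix itself is a small wiring change at setup time. A sketch of what the switch looks like with Drizzle ORM and the `@neondatabase/serverless` package, assuming that stack (the article does not name the ORM, so treat the specifics as illustrative):

```typescript
// Before: HTTP driver. Each query is a standalone HTTP request, so
// interactive transactions are not supported over this transport.
// import { neon } from "@neondatabase/serverless";
// import { drizzle } from "drizzle-orm/neon-http";
// const db = drizzle(neon(process.env.DATABASE_URL!));

// After: WebSocket-backed Pool. The driver holds a session, so
// db.transaction() works, including in serverless runtimes like Vercel.
import { Pool } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-serverless";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool);
```

Two import lines and a pool. Knowing to write it this way on day one is cheaper than discovering the transaction crash in production.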
4. The em-dash problem (AI style tics)
The problem: The AI kept inserting em-dashes in generated copy, requiring 6 separate cleanup commits across blog posts, the playbook, calendar reminders, the feedback form, and the What's New page. The AI does not learn from prior corrections in the same project.
How a coach fixes it: A coach establishes a writing style rule file at the start of the project. The AI reads it on every prompt. One rule file, zero em-dash commits. This is the kind of project setup that takes 5 minutes and saves hours.
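A rule file can be only a few lines. A hypothetical example (the file name and rules are illustrative, not taken from the project):

```
# STYLE.md — the AI reads this before generating any copy
- Never use em-dashes. Use a period or a comma instead.
- Sentence case for headings and button labels.
- Keep blog and changelog copy under 20 words per sentence.
```

Pointing the AI at one file like this on every prompt is what turns six cleanup commits into zero.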
5. Relentless layout churn
The problem: The AI cannot nail visual layout on the first pass. There is a long tail of grid, column, scroll, and spacing fixes:
- Uniform grid layout for stat boxes had to be fixed manually
- Dashboard section columns needed reordering
- Admin roster scroll was fixed twice
- Card reader layout broke because of a ScrollReveal transform ancestor
Layout is iterative by nature, but the AI generates layout code that looks plausible but falls apart under real content and real viewport sizes.
How a coach fixes it: A coach reviews layout decisions at the component level, not the pixel level. They catch the ScrollReveal transform issue because they have seen it before. They set up responsive patterns that work across viewports the first time, and they know when to use CSS Grid vs. Flexbox vs. a simple stack.
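The ScrollReveal failure is an instance of a documented CSS rule: an ancestor with a transform becomes the containing block for its fixed-position descendants. A minimal sketch (class names are illustrative):

```css
/* A reveal-on-scroll wrapper typically animates with a transform: */
.scroll-reveal {
  transform: translateY(20px); /* creates a containing block */
}

/* Any position: fixed descendant is now positioned relative to
   .scroll-reveal instead of the viewport, so the layout breaks: */
.scroll-reveal .card-reader-overlay {
  position: fixed;
}

/* Fix: render the fixed element outside the transformed ancestor
   (e.g. via a portal), or remove the transform once the reveal
   animation completes. */
```

This is the kind of bug that looks random until someone who has hit it before names the rule.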
6. Add-then-remove cycles
The problem: Features get added with enthusiasm, then pared back or removed entirely in subsequent commits:
- Count card added to the consultant dashboard, then removed one commit later
- Publish UI added, then removed
- Mock team-setup page created, then removed
- Postmark email service added, then removed
- An entire panel built, then replaced with a simple link
Each of these cycles is wasted work. The AI does not push back on scope or ask whether a feature is actually needed.
How a coach fixes it: A coach asks "do you actually need this?" before you build it. That one question, applied consistently, eliminates most add-then-remove cycles. The coach helps you scope features to what your users need right now, not what might be cool later.
7. Naming and wording whiplash
The problem: AI-generated labels frequently need human correction:
- "Issue" renamed to "Observation"
- "Your Journey" renamed to "Streaks"
- "Hearts" renamed to "Amazing"
- "Athletes/roster" changed to "individuals/team"
- Changelog entries rewritten and consolidated 3 times
The AI picks plausible-sounding names that do not match your product's voice or your users' vocabulary.
How a coach fixes it: A coach helps you define your naming conventions early. A glossary of product terms that the AI references on every prompt. "We call them individuals, not athletes. We call it an observation, not an issue." One document, consistent naming from the first commit.
8. Mega-commits that bundle too much
The problem: The AI often packs unrelated changes into single commits, making rollback dangerous:
- One commit addressed 6 feedback issues across dashboard sections, buddy streaks, exercise stats, and the admin panel
- Another switched the DB driver, added an agent-chat feature, AND applied a migration
- A third resolved 4 dashboard bugs and added a test harness in the same commit
When something breaks in a mega-commit, you cannot roll back the broken part without losing the working parts.
How a coach fixes it: A coach teaches you commit discipline from day one. One logical change per commit. The AI does not enforce this on its own, but a coach reviewing your work will flag it every time until it becomes habit. Clean commits make debugging, rollback, and code review dramatically easier.
9. Extreme velocity, extreme thrash
The problem: The busiest days had 30 to 32 commits, almost one every 30 minutes. This pace produces a pattern of "ship it, fix it, fix the fix, rename it, simplify it." Tight feature-fix-feature-fix loops with no breathing room for design.
How a coach fixes it: A coach is the breathing room. They slow you down just enough to make decisions instead of reactions. They help you plan 3 features ahead instead of building and reverting in real time. The velocity stays high, but the direction stays consistent.
The meta-lesson
Vibe coding excels at rapid prototyping and feature breadth. 153 feature commits in about 2 months is genuinely impressive velocity. But it trades that velocity for stability. Without a coach, the human ends up being a full-time QA engineer, copy editor, and architect, catching the same categories of mistakes over and over because the AI does not accumulate project wisdom between sessions.
Coach coding does not replace the AI. It gives the AI a better environment to work in. Style guides, naming conventions, architecture patterns, commit discipline, deployment knowledge. The coach builds the guardrails. The AI builds the features. The result is the same velocity with a fraction of the thrash.
If you are building with AI and recognize these patterns in your own project, book a free call. 30 minutes. We will look at where your project is and figure out what a coach would catch first.