Security Incidents

The Dangers of Vibe Coding

Real incidents. Real data breaches. Real lessons. Learn from others' vibe coding disasters before they become yours.

Find vulnerabilities before attackers do.

These are real incidents, not hypotheticals

Every incident below is documented. Real apps were breached. Real data was exposed. Real databases were deleted. The question isn't whether vibe coding has risks—it's whether you'll be the next incident.

Documented Incidents

CVE-2025-48757: 170+ Lovable Apps Exposed

2025

A security researcher discovered that over 170 applications built with Lovable had completely exposed databases. The cause: missing Row Level Security (RLS) policies on their Supabase tables.

Impact

Full database exposure for 170+ production apps. User data, authentication info, and business data accessible to anyone with the public anon key.

Root Cause

AI tools don't configure database security by default, and developers trusted that the generated code was secure.

Lesson

Always configure RLS before deployment. Test by querying with just the anon key.
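
For Supabase, that test takes only a few lines (and enabling RLS itself is a single ALTER TABLE ... ENABLE ROW LEVEL SECURITY statement in SQL). With nothing but the public anon key, a table protected by RLS should return zero rows or a policy error, never other users' data. A minimal sketch, assuming a hypothetical profiles table and the standard @supabase/supabase-js client:

```ts
import { createClient } from "@supabase/supabase-js";

// Use ONLY the public anon key here -- the same credential any
// visitor can extract from your frontend bundle.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// With RLS enabled and no permissive policy, this should return
// zero rows (or a policy error), not your whole user table.
const { data, error } = await supabase.from("profiles").select("*");

if (data && data.length > 0) {
  console.error(`EXPOSED: anon key can read ${data.length} rows`);
} else {
  console.log("OK: not readable with the anon key.", error?.message ?? "");
}
```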

Replit AI Agent Deletes User Database

2025

In a widely reported incident, Replit's AI agent, given access to a production database, ran destructive operations and wiped the user's entire database.

Impact

Complete data loss. User lost their production database with no way to recover.

Root Cause

AI agents with write access to databases can make catastrophic mistakes, and there were no safeguards against destructive operations.

Lesson

Never give AI agents unrestricted database write access. Keep backups. Review all AI actions before accepting them.
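
There's no standard safeguard here yet, so the guardrail has to live in your own glue code. A hypothetical sketch that refuses to forward destructive SQL from an agent to the database (the function name and regexes are illustrative, not from any real framework):

```ts
// Hypothetical guard between an AI agent and your database:
// only explicitly allowlisted statement types get through.
const ALLOWED = /^\s*(select|insert)\b/i;
const DESTRUCTIVE = /\b(drop|truncate|delete|alter|grant)\b/i;

function guardAgentSql(sql: string): string {
  if (!ALLOWED.test(sql) || DESTRUCTIVE.test(sql)) {
    throw new Error(`Blocked potentially destructive statement: ${sql}`);
  }
  return sql;
}

// guardAgentSql("DROP TABLE users;")             -> throws
// guardAgentSql("SELECT * FROM users WHERE ...") -> passes
```

Better still, connect the agent through a database role that simply lacks DROP, DELETE, and ALTER privileges, so even a bypassed check can't destroy data.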

Cursor MCP Vulnerabilities (CurXecute & MCPoison)

August 2025

CVE-2025-54135 and CVE-2025-54136 revealed that Cursor's Model Context Protocol (MCP) integrations could be exploited via prompt injection, leading to remote code execution.

Impact

Attackers could execute arbitrary code on developers' machines through malicious MCP server responses or Slack messages.

Root Cause

MCP tool responses weren't properly sanitized, so prompt injection in external data could manipulate Cursor's actions. In MCPoison's case, Cursor also kept trusting an MCP configuration after its one-time approval, even if the file was later modified.

Lesson

Be cautious with MCP integrations. Review MCP server sources. Watch for suspicious tool calls.
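
One concrete, if low-tech, mitigation is to pin a hash of your MCP configuration and fail loudly when it changes, since MCPoison-style attacks rely on silently editing an already-approved config. A sketch assuming a Cursor-style .cursor/mcp.json (the paths and pin-file name are illustrative):

```ts
import { createHash } from "node:crypto";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Pin a hash of the approved MCP config; alert if it ever changes.
const CONFIG = ".cursor/mcp.json";
const PIN = ".cursor/mcp.json.sha256";

const hash = createHash("sha256").update(readFileSync(CONFIG)).digest("hex");

if (!existsSync(PIN)) {
  writeFileSync(PIN, hash);
  console.log("Pinned current MCP config. Review it now.");
} else if (readFileSync(PIN, "utf8") !== hash) {
  console.error("MCP config changed since last review. Re-review before trusting it.");
  process.exit(1);
} else {
  console.log("MCP config unchanged.");
}
```

Run it from a pre-commit hook or your shell startup so a silent edit never goes unnoticed.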

API Keys in 10,000+ GitHub Repos

Ongoing

Automated scanners continuously find exposed API keys in GitHub repositories, many from AI-generated code that hardcoded secrets.

Impact

Financial losses from abused API quotas. Stolen keys used for crypto mining, spam, and data theft.

Root Cause

AI coding tools suggest hardcoded keys for quick demos. Developers commit them to public repos.

Lesson

Use .gitignore for .env files. Run secret scanning. Rotate any exposed keys immediately.
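
The fix is mechanical: keep keys in an environment file that never reaches the repo and read them at runtime. A minimal sketch (the variable name is just an example):

```ts
// .env (listed in .gitignore, never committed):
//   OPENAI_API_KEY=sk-...

// Read the key from the environment instead of hardcoding it.
const apiKey = process.env.OPENAI_API_KEY;

if (!apiKey) {
  // Failing fast beats shipping a hardcoded fallback key.
  throw new Error("OPENAI_API_KEY is not set");
}
```

Pair this with a scanner such as gitleaks or GitHub secret scanning so a hardcoded key is caught before it ever reaches a public repo.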

Firebase Test Mode Rules in Production

Ongoing

Thousands of Firebase apps have been deployed with test-mode Security Rules, which allow anyone to read and write all data.

Impact

Complete database exposure and data-ransom attacks. Over 47,000 exposed MongoDB and Firebase databases were attacked in the 2017-2020 ransom wave.

Root Cause

Test-mode rules are permissive by design, allowing open read and write access until an expiry date, and developers forget to replace them before launch.

Lesson

Replace test rules immediately. Use the Firebase Emulator Suite to test production rules before deploying.
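
The emulator makes this test scriptable. A minimal sketch using @firebase/rules-unit-testing against a hypothetical users collection (requires the Firestore emulator running locally):

```ts
import { readFileSync } from "node:fs";
import { initializeTestEnvironment, assertFails } from "@firebase/rules-unit-testing";
import { doc, getDoc } from "firebase/firestore";

// Load your real production rules into the local emulator.
const testEnv = await initializeTestEnvironment({
  projectId: "demo-myapp", // any id works against the emulator
  firestore: { rules: readFileSync("firestore.rules", "utf8") },
});

// An unauthenticated client must NOT be able to read user data.
const anonDb = testEnv.unauthenticatedContext().firestore();
await assertFails(getDoc(doc(anonDb, "users/some-user-id")));

await testEnv.cleanup();
console.log("OK: unauthenticated reads are denied.");
```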

Tea Sapphos Data Leak: 72K Selfies + IDs

2025

An AI-built verification app exposed 72,000 user selfies and government ID photos due to missing authentication on API endpoints.

Impact

Massive privacy violation. Government IDs and biometric data exposed publicly.

Root Cause

The AI created API endpoints without any authentication requirements, and no security review happened before launch.

Lesson

Every endpoint handling sensitive data needs authentication. Review AI-generated APIs carefully.
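
A sketch of the missing piece: deny-by-default middleware that rejects unauthenticated requests before any handler touches sensitive data. This assumes an Express app; verifyToken is a placeholder for your real JWT or session check:

```ts
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Placeholder: swap in your real JWT/session verification.
function verifyToken(token: string): { userId: string } | null {
  return token === "valid-demo-token" ? { userId: "u1" } : null;
}

// Every route serving sensitive data goes behind this check.
function requireAuth(req: Request, res: Response, next: NextFunction) {
  const token = req.header("Authorization")?.replace("Bearer ", "");
  const user = token ? verifyToken(token) : null;
  if (!user) return res.status(401).json({ error: "unauthorized" });
  (req as any).user = user;
  next();
}

app.get("/api/verifications/:id", requireAuth, (req, res) => {
  // Also verify the record belongs to the authenticated user
  // before returning it -- otherwise this is still an IDOR hole.
  res.json({ ok: true });
});

app.listen(3000);
```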

Common Vibe Coding Dangers

The Danger

Trusting AI-generated code is secure

The Reality

AI prioritizes functionality over security. Studies have found vulnerabilities in roughly 45% of AI-generated code.

Prevention

Review and scan all generated code before deployment.

The Danger

Skipping security configuration

The Reality

RLS, Security Rules, and headers aren't configured by default.

Prevention

Configure database security and headers as first steps, not afterthoughts (see the headers sketch after these dangers).

The Danger

Hardcoded secrets in source

The Reality

AI suggests keys in code for quick demos. They get committed and exposed.

Prevention

Use environment variables. Add .env to .gitignore. Run secret scanning.

The Danger

No security testing before launch

The Reality

Most vibe-coded apps never get a security review before going live.

Prevention

Run automated security scans. Fix critical issues before any real users.
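
On the headers point above: a baseline takes one middleware call. A minimal sketch assuming a Node/Express app (adapt to your framework):

```ts
import express from "express";
import helmet from "helmet";

const app = express();

// helmet() sets a sensible baseline of security headers
// (HSTS, X-Content-Type-Options, frame protections, etc.).
app.use(helmet());

app.get("/", (_req, res) => {
  res.send("hello");
});

app.listen(3000);
```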

How to Avoid Being the Next Incident

  • Run security scans BEFORE deploying—not after a breach
  • Configure database security (RLS/Security Rules) on day one
  • Never give AI agents unrestricted database write access
  • Keep backups. Test restoring them.
  • Review AI-generated code for auth and data access logic
  • Use environment variables. Never commit secrets.
  • Test security by trying to access data you shouldn't (see the sketch below)
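
The simplest such test: while authenticated as one user, request another user's record and confirm the API refuses. A hypothetical smoke test (the URL, path, and IDs are placeholders):

```ts
// Logged in as user A, try to fetch user B's record.
// A secure API should answer 401/403 (or 404), never 200.
const USER_A_TOKEN = process.env.USER_A_TOKEN!; // token for your own test account
const OTHER_USER_ID = "some-other-users-id";    // an ID you do NOT own

const res = await fetch(
  `https://your-app.example.com/api/users/${OTHER_USER_ID}`,
  { headers: { Authorization: `Bearer ${USER_A_TOKEN}` } }
);

if (res.ok) {
  console.error(`FAIL: read another user's data (HTTP ${res.status})`);
} else {
  console.log(`OK: access denied (HTTP ${res.status})`);
}
```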

Don't Become a Statistic

Every incident above could have been prevented with a pre-launch security scan. Check your app before someone else does.

Scan Your App Free

Frequently Asked Questions

What are the dangers of vibe coding?

The main dangers are: exposed databases due to missing security configuration, hardcoded credentials that get leaked, AI agents making destructive changes, and trusting generated code without security review. These have caused real data breaches and data loss incidents.

Has anyone actually been hacked because of vibe coding?

Yes. CVE-2025-48757 exposed 170+ production applications. The Tea Sapphos incident leaked 72,000 user selfies and government ID photos. Multiple users have reported AI agents deleting their databases. These aren't hypotheticals; they're documented incidents.

Is vibe coding too dangerous to use?

No, but it requires security awareness. Vibe coding accelerates development but also accelerates insecure development. The tools aren't dangerous—deploying without security review is dangerous. Scan your code, configure security, and review AI suggestions.

How do I protect myself when vibe coding?

Four key steps: 1) Configure database security (RLS/Security Rules) immediately, 2) Never commit secrets to code, 3) Run security scans before deployment, 4) Review all AI-generated code, especially auth and data access logic.

Why don't AI coding tools add security by default?

AI tools are optimized for speed and functionality, not security. Adding security requires understanding your specific requirements—what data is sensitive, who should access what. This context isn't available to the AI, so security becomes your responsibility.

Last updated: January 16, 2026