How We Hardened Lotify, a B2B Car Dealer Marketplace, Before Launch
Lotify is a B2B marketplace where verified UK motor dealers post wanted vehicles, receive matching stock, and transact directly. It was built fast, on Next.js and Supabase, with an AI-assisted workflow. Before opening the doors at scale, the team ran a full Vibe App Scanner audit to catch what the AI-assisted workflow had missed.
At a glance
- Stack: Next.js, Supabase, Vercel
- Surface: dealer accounts, dealer-to-dealer messaging, request and response workflows, admin approvals, subscription-gated access
- Findings: a small cluster of high and medium severity issues, all configuration-level
- Outcome: hardened baseline reached without changes to application logic
The platform and the risk it carries
AI-assisted development is fast. That is its point. But speed compresses the part of the cycle where you would normally pause to harden defaults, and it produces apps that are functionally complete while quietly relying on whatever security posture happened to ship in the box.
The Lotify platform handles authenticated dealer accounts, private messaging between dealers, structured request and response flows, an admin approval pipeline, and subscription-gated access. That mix lines up with five well-known risk areas:
- authentication and password policy
- database access control (Row Level Security on Supabase)
- public exposure of configuration values
- protection of authentication endpoints against automated abuse
- baseline browser security controls (security headers)
The goal was not a full penetration test. It was a structured baseline review of the categories that consistently slip through when applications get built quickly with AI in the loop.
What the scan found
The scan returned a small number of high and medium severity issues. None of them were structural. Every one was the kind of finding that comes from a default that was never tightened, or a control that was never added. That is exactly the failure mode we expect from this delivery model, and it is what makes a pre-launch scan such a high-leverage step.
The findings clustered into four areas.
1. Password policy hardening
The default Supabase auth configuration accepted weak passwords. Short values were allowed. Numeric-only values were allowed. For a platform built on verified-dealer trust, that is the wrong baseline.
The remediation:
- increased the minimum password length
- required character complexity
- enabled stronger validation rules in the Supabase auth settings
This closes off the easiest credential-based attacks: dictionary attempts and short-key brute force.
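The tightened policy can be sketched as a validator. The 12-character minimum and the three-of-four character-class rule below are illustrative values, not Lotify's actual Supabase auth settings:

```typescript
// Sketch of a hardened password policy. Thresholds here are assumptions
// for illustration; the real values live in the Supabase auth settings.
function isAcceptablePassword(password: string): boolean {
  if (password.length < 12) return false;   // reject short values
  if (/^\d+$/.test(password)) return false; // reject numeric-only values
  const hasLower = /[a-z]/.test(password);
  const hasUpper = /[A-Z]/.test(password);
  const hasDigit = /\d/.test(password);
  const hasSymbol = /[^a-zA-Z0-9]/.test(password);
  // Require at least three of the four character classes.
  return [hasLower, hasUpper, hasDigit, hasSymbol].filter(Boolean).length >= 3;
}
```

The point is not the exact thresholds; it is that the defaults accepted values this check would reject.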
2. Authentication endpoint protection
Authentication endpoints were reachable with no friction beyond basic rate limiting. That is enough to deter casual attempts but not enough to deter an automated tool pointed at the login or password-reset flow.
CAPTCHA protection was added across the three flows that matter most:
- registration
- login
- password reset
Rate limiting thresholds were also reviewed and tightened so abuse hits the brakes earlier.
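The shape of that tightening can be sketched with a minimal fixed-window limiter. The five-attempts-per-minute threshold is an illustrative value, not Lotify's actual configuration, and a real deployment on Vercel would need a shared store such as Redis rather than per-instance memory:

```typescript
// Minimal fixed-window rate limiter, keyed by e.g. IP or account.
// WINDOW_MS and MAX_ATTEMPTS are illustrative values only.
const WINDOW_MS = 60_000;
const MAX_ATTEMPTS = 5;

const windows = new Map<string, { start: number; count: number }>();

function allowAttempt(key: string, now: number = Date.now()): boolean {
  const w = windows.get(key);
  if (!w || now - w.start >= WINDOW_MS) {
    // First attempt in a fresh window: record it and allow.
    windows.set(key, { start: now, count: 1 });
    return true;
  }
  w.count += 1;
  return w.count <= MAX_ATTEMPTS;
}
```

Lowering MAX_ATTEMPTS or widening WINDOW_MS is what "hitting the brakes earlier" means in practice.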
3. Security headers
Baseline security headers were not configured at the start. These are cheap to add and close off a long list of opportunistic browser-level attacks.
Implemented:
- X-Frame-Options: blocks clickjacking via iframe embedding
- X-Content-Type-Options: blocks MIME type sniffing
- Content-Security-Policy: introduced in report-only mode first, ahead of enforcement
- Referrer-Policy and Permissions-Policy: tightened defaults around referrer leakage and unused browser features
Rolling CSP out in report-only mode first is the right move. It catches violations from real traffic without breaking anything, and gives you a clean policy to switch to enforce mode later.
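In a Next.js app these headers are typically set in next.config.js via its headers() option. The values below are a sketch, not Lotify's actual policy; in particular the CSP directive is a placeholder:

```typescript
// Header set matching the list above. Values are illustrative; the real
// CSP would be built from the app's actual script and asset origins.
const securityHeaders = [
  { key: "X-Frame-Options", value: "DENY" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Permissions-Policy", value: "camera=(), microphone=(), geolocation=()" },
  // Report-only first: violations get reported, nothing gets blocked.
  {
    key: "Content-Security-Policy-Report-Only",
    value: "default-src 'self'; report-uri /csp-report",
  },
];

// In next.config.js this would be wired up as:
// module.exports = {
//   async headers() {
//     return [{ source: "/(.*)", headers: securityHeaders }];
//   },
// };
```

Once report-only traffic runs clean, the same value moves to the Content-Security-Policy header for enforcement.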
4. Configuration and exposure checks
Several findings were not corrections so much as confirmations. The scan flagged things that looked off and required a human to confirm intent:
- cross-domain form submissions: confirmed as intentional, driven by domain canonicalisation
- environment variables visible in the frontend: reviewed and confirmed to contain only public-safe keys (Supabase anon key, public URLs)
- no sensitive credentials reachable from the client
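In Next.js, only environment variables prefixed NEXT_PUBLIC_ are inlined into the client bundle, which is why the review focused on that set. A guard like the following (an illustrative check, not Lotify's code) can flag names that carry the public prefix but look like they hold secrets:

```typescript
// Heuristic check on env var names destined for the client bundle.
// SECRET_HINTS is an illustrative list, not an exhaustive one.
const SECRET_HINTS = ["SECRET", "PRIVATE", "SERVICE_ROLE", "PASSWORD"];

function looksUnsafeForClient(name: string): boolean {
  return (
    name.startsWith("NEXT_PUBLIC_") &&
    SECRET_HINTS.some((hint) => name.toUpperCase().includes(hint))
  );
}
```

The Supabase anon key passes a check like this by design, which is the subject of the next point.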
Public Supabase keys are by design
The Supabase anon key is supposed to be visible to the client. Security on that key is enforced by Row Level Security policies in the database, not by hiding the key. The validation step here was confirming RLS was actually doing the work, not assuming it.
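The check logic can be sketched as follows. In practice the query result would come from @supabase/supabase-js, created with createClient(url, anonKey) and queried via .from(...).select(...); the table name "messages" and column "recipient_id" below are illustrative, not Lotify's actual schema:

```typescript
// Shape of a supabase-js query result, reduced to what this check needs.
type QueryResult = {
  data: { recipient_id: string }[] | null;
  error: unknown;
};

// Given a query made with the public anon key on behalf of one dealer,
// RLS should either deny the query or filter it to that dealer's rows.
function violatesIsolation(result: QueryResult, dealerId: string): boolean {
  // A denied query means RLS is doing its job.
  if (result.error !== null) return false;
  // Any row belonging to another dealer means the policy has a gap.
  return (result.data ?? []).some((r) => r.recipient_id !== dealerId);
}
```

Running a check like this against real tables, with a real session, is what "confirming RLS was actually doing the work" looked like.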
Outcome
After remediation, the application moved from a default, development-oriented configuration to a hardened baseline appropriate for production. Three things stood out about the work:
- no architectural changes were required
- every fix was at the configuration or control layer
- core application logic was untouched
This is the consistent shape of the work for AI-built applications. The gaps are not in the architecture. They are in the defaults that nobody overrode and the controls nobody added.
Takeaway
AI-assisted workflows produce applications that are fast to build and functionally complete. They do not, by themselves, enforce production-level security standards. For Lotify the required work was modest but necessary:
- strengthen authentication
- add protection against automated abuse
- implement standard security headers
- validate configuration boundaries
The pattern is straightforward: build quickly, validate externally, harden selectively, then proceed to scale. Tools like Vibe App Scanner give you a structured way to run that validation step before traffic, in environments where development speed is high and security configuration tends to lag behind.
Run the same scan on your app
Vibe App Scanner runs the same checks we ran on Lotify against any URL: Supabase RLS, auth endpoint protection, headers, secret exposure, and more. Get a structured baseline before you launch.