I've been poking at vibe coded products for a while now, but last Wednesday I decided to do something systematic. I went on Reddit, found three products that had been posted by their founders that day, all built with AI coding tools, all live and handling real user data or real money. I signed up to each one as a normal user. No automated scanners, no exploit frameworks. Just a browser, dev tools, and about six hours.
All three had critical security vulnerabilities. Two of them could be fully compromised with a single API request. One of them had its entire product dataset, 681,383 records, downloadable by anyone with no authentication at all.
I disclosed everything responsibly. Each founder was contacted, given specifics through private channels, and given time to fix the issues before this was written. I'm not naming the products, the founders, or the specific URLs. What matters here is the pattern, because the pattern is the story.
Product A: The Marketplace
The first product was a small e-commerce marketplace. Live, accepting real payments through Stripe Connect, 77 seller accounts registered, 38 products listed, a handful of sellers with connected Stripe accounts actively accepting money. Built on Supabase. Posted on Reddit by the founder looking for early users.
I signed up with a normal email, poked around, opened dev tools, and within about twenty minutes had identified the core issue. The Row-Level Security policy on the sellers table allowed any authenticated user to update every column on their own database row. Every column. Including is_admin, is_blocked, stripe_account_id, and stripe_charges_enabled.
To be clear about what that means in practice: a single GraphQL mutation, the kind you can fire from a browser console, let any logged-in user grant themselves full administrator privileges on the platform; overwrite the Stripe account ID on their seller record to redirect payments to an attacker-controlled account; unblock themselves if an admin had banned them (rendering the entire moderation system useless); and mark themselves as Stripe-verified without completing Stripe's identity verification process.
No exploit chain. No privilege escalation trickery. No chaining of multiple bugs. Just update the fields. The database let you.
I confirmed the admin escalation worked by flipping is_admin to true, verifying it via the platform's own RPC function, and immediately reverting it. The Stripe account ID was set to an obviously fake test value. Everything was documented and reverted within seconds.
I then enumerated the blast radius. Of the 77 seller accounts, 8 had connected Stripe accounts. Three of those were real sellers with profiles, product photos, shipping policies, and active payment infrastructure. One had detailed notes about shipping sterile products via USPS Express, which suggests a real physical goods business. Two had stripe_charges_enabled set to true, meaning they were actively accepting customer payments through the platform at the time of testing.
Those three sellers could have had their payment routing silently hijacked by anyone who bothered to sign up.
GraphQL introspection was also fully enabled for anonymous users, which meant the complete database schema (100 types, 17 mutations, all five tables with every column name and type) was browsable by anyone before they even created an account. An attacker would know exactly what to target before writing a single request.
I contacted the founder through a public Reddit comment, kept it vague ("serious backend security issue, user data exposed, authenticated users can modify security-critical fields"), and offered to discuss details privately. Their response: "I'm having my tech lead look into it."
I would bet real money the tech lead is Claude.
Why Supabase keeps producing this bug
This is worth dwelling on because it is not a one-off. The root cause is Supabase's fundamental security architecture. The pitch is "skip the backend, talk to the database directly from the client." That sounds brilliant until you realise what it actually means: your security boundary is now a set of SQL policies scattered across a dashboard, and the default posture is open.
In a traditional backend, if you want users to update their display name and bio, you write a /update-profile endpoint that accepts display_name and bio and nothing else. The attack surface is visible because you defined it. If someone sends is_admin: true in the request body, your endpoint ignores it because you never wrote code to handle it.
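The closed-by-default endpoint can be sketched in a few lines. This is an illustrative pattern, not code from any of the tested products; the endpoint logic, field names, and allowlist are all hypothetical.

```python
# Hypothetical closed-by-default update handler. Only fields you explicitly
# allow can ever reach the database; everything else is dropped.

ALLOWED_FIELDS = {"display_name", "bio"}

def update_profile(current_user: dict, request_body: dict) -> dict:
    """Apply only explicitly allowed fields; silently ignore the rest."""
    updates = {k: v for k, v in request_body.items() if k in ALLOWED_FIELDS}
    current_user.update(updates)
    return current_user

# A request smuggling in is_admin has no effect, because the endpoint was
# never written to handle that field.
user = {"id": 1, "display_name": "old", "bio": "", "is_admin": False}
update_profile(user, {"display_name": "new", "is_admin": True})
```

The attack surface is exactly the two fields in the allowlist, because those are the only fields the code mentions at all.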
With Supabase, the attack surface is the entire table schema minus whatever you remembered to lock down. If you set an RLS policy that says "authenticated users can update their own row" and forget to restrict which columns, congratulations, you've just given every user write access to every field on their record. Including the ones that control money and permissions. The developer who built this marketplace didn't write a policy that was wrong, exactly. They wrote a policy that was incomplete. And incomplete, in a default-open system, means vulnerable.
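In Postgres terms (which is what Supabase runs on), the fix has two halves, because an RLS policy restricts which rows a user can update but says nothing about which columns. A sketch, assuming a sellers table with a user_id column and the profile fields above; the policy name and column names are illustrative:

```sql
-- Row filter: users may only touch their own row.
create policy "sellers_update_own"
  on public.sellers for update
  using (auth.uid() = user_id)
  with check (auth.uid() = user_id);

-- Column filter: RLS alone does not restrict columns, so revoke the
-- blanket UPDATE grant and re-grant only the safe ones. is_admin,
-- is_blocked and the Stripe fields become unwritable from the client.
revoke update on public.sellers from authenticated;
grant update (display_name, bio) on public.sellers to authenticated;
```

The marketplace had written something like the first half and skipped the second, which is precisely the incomplete-but-not-wrong failure described above.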
Product B: The AI SaaS Platform
The second product was an AI-powered SaaS tool. Different founder, different Reddit post, same day. This one was built on a standard backend (Python, hosted on Render) rather than Supabase. It had a chat-based AI interface, an onboarding workflow that scraped and analysed websites, and a credit-based billing system through Stripe.
This one was worse. Not in a subtle, misconfigured-policy way. In a "the development endpoints are still in production" way.
Two endpoints stood out immediately.
POST /api/auth/mark-paid allowed any authenticated user to mark themselves as a paid customer. No admin check. No Stripe webhook verification. No payment proof of any kind. Call the endpoint with any valid JWT and you are premium. The response even helpfully confirmed it: {"message": "Payment status updated", "has_paid": true}.
POST /api/users/me/credits/add allowed any authenticated user to add arbitrary credits to their own account. The amount was passed as a query parameter. No upper bound was tested, but amount=1 worked fine, returning: {"success": true, "credits_added": 1, "old_balance": 1845, "new_balance": 1846}. An attacker could presumably pass amount=999999 and get effectively unlimited usage.
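The correct pattern is for payment state to change only in response to a verified Stripe webhook, never via a client-callable endpoint. Stripe signs each webhook with an HMAC-SHA256 over the timestamp and raw payload; the sketch below shows that style of check using only the standard library. The secret value and function name are illustrative, not from the tested product.

```python
import hmac
import hashlib
import time

# Hypothetical endpoint secret; in Stripe's scheme this is the "whsec_..."
# value issued when the webhook endpoint is created.
ENDPOINT_SECRET = b"whsec_example_secret"

def verify_signature(payload: bytes, timestamp: str, signature_hex: str,
                     tolerance_seconds: int = 300) -> bool:
    """Stripe-style check: HMAC-SHA256 over 'timestamp.payload', plus a
    replay window on the timestamp."""
    if abs(time.time() - int(timestamp)) > tolerance_seconds:
        return False  # stale event, possible replay
    signed_payload = timestamp.encode() + b"." + payload
    expected = hmac.new(ENDPOINT_SECRET, signed_payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Only a request that passes this check should ever flip has_paid to True
# or credit an account.
```

With this in place, a mark-paid or add-credits endpoint simply has no reason to exist in production.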
These are not security vulnerabilities in the traditional sense. These are development and testing endpoints that were never removed before the product shipped to production. They exist because someone (or something) needed them during development to test the billing flow without processing real payments. And then nobody reviewed the codebase before deploying. Nobody had to, because nobody wrote it. Nobody held the mental model of what was exposed.
But those two endpoints were just the beginning. The application returned raw internal debug logs directly to the client through its workflow API. I could see full server file paths (/opt/render/project/src/backend/projects/427/onboarding/...), the names and versions of every prompt in the Langfuse prompt registry (onboarding-brand-analysis v1, onboarding-company-founders v1), which AI models were being used and in what order (Gemini 3 Flash, fallback to claude-sonnet-4-5), confirmation that specific third-party API keys existed (Apify API key found, ScrapeGraph API key found, Serper API key found), internal credit balances for third-party services (ScrapeGraph credits before: 27723), and every search query the system executed, verbatim, logged to the client.
The full Swagger documentation was publicly accessible at /docs, no authentication required, mapping out all 130-plus endpoints with complete request and response schemas. An attacker could browse the entire API surface in a web browser before writing a single line of code.
The orchestrator chat endpoint accepted client-controlled conversation history, meaning you could inject arbitrary assistant or system messages into the history to manipulate the AI's behaviour. The client could also specify which AI model to use via the POST body, which is a cost-abuse vector if the server doesn't validate it.
The observability integration was completely broken. Thirteen separate Langfuse errors in a single workflow run, all saying 'Langfuse' object has no attribute 'generation', which is a version mismatch between their code and the Langfuse SDK. They're paying for observability and getting nothing. Flying blind.
HTTP 403 errors from Crunchbase were being treated as successful scrapes. The error page HTML was parsed as valid founder data and fed into downstream AI analysis. The sitemap parser crashed on malformed XML and the workflow continued as if nothing happened. Character encoding corruption was flagged seven times in one run and ignored every time. The entire pipeline had a pattern of silent failure: errors were swallowed, garbage data flowed downstream, and the system reported success.
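The fix for this silent-failure pattern is boring and mechanical: check the status before parsing, and raise instead of continuing. A minimal sketch with a hypothetical error type; nothing here is from the product's actual code.

```python
class ScrapeError(Exception):
    """Raised when a fetch did not return usable page content."""

def extract_page_text(status_code: int, body: str) -> str:
    """Fail fast instead of parsing an error page as data."""
    if status_code != 200:
        # The broken pipeline parsed Crunchbase 403 pages as founder data;
        # raising here keeps garbage out of the downstream AI analysis.
        raise ScrapeError(f"fetch failed with HTTP {status_code}")
    return body
```

One guard clause per pipeline stage is the entire difference between "the workflow reported success" and knowing that a third of your inputs were error pages.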
I tested the onboarding flow with deliberately adversarial inputs. The founder discovery module took the company name at face value and searched LinkedIn for it verbatim, finding people who had used the word in completely unrelated contexts, classifying them as founders, and passing that data downstream as fact. No input validation, no relevance scoring, no semantic filtering. The AI analysis at the end of the pipeline was built on entirely incorrect data and presented it with complete confidence.
Disposable email addresses were accepted at registration with no domain validation. No content moderation on user inputs to the AI. The payment and subscription state was internally inconsistent (has_paid: true but subscription_status: null). Session metadata claimed one AI model was used while the logs showed a completely different one.
This is not a product with a security bug. This is a debug build running in production with a payment form attached to it.
Product C: The Jobs Platform
The third product was a jobs and salary benchmarking platform. Also Supabase. Also posted on Reddit by its founder that same day.
This one didn't require authentication at all.
The Supabase anon key, which is embedded in the frontend JavaScript and visible to anyone who opens dev tools, granted full read access to every table in the database. No user account needed. No login. No token. Just the public key that ships with the frontend code, which Supabase explicitly designs to be public with the expectation that RLS policies will restrict what it can access.
The RLS policies were either disabled or set to allow everything.
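Closing this off is, in Postgres terms, two statements per table. A sketch using the salary_benchmarks table named below; the policy name and the choice to allow authenticated reads are illustrative assumptions:

```sql
-- With RLS enabled and no policies defined, the anon key can read nothing.
alter table public.salary_benchmarks enable row level security;

-- Then grant back only what the product actually needs, for example
-- read access for logged-in users:
create policy "benchmarks_read_authenticated"
  on public.salary_benchmarks for select
  to authenticated
  using (true);
```

Supabase's anon key is designed to be public precisely on the assumption that policies like these exist. Here, they didn't.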
Using simple paginated GET requests, I downloaded the entire database in a single scripted pass:
The salary_benchmarks table contained 681,383 records totalling 489 MB of JSON. This is the platform's core product data. Salary ranges broken down by role, seniority level, city, company size, and company stage. Each record includes a source field that reveals how the benchmark was derived, things like "Derived from DevOps Engineer +12%", which exposes the entire methodology. The confidence_score and data_version fields reveal internal quality metrics. This data represents whatever collection and computation effort the founder invested in building the platform, and it was downloadable by anyone.
The companies table had 1,718 company profiles with logos, locations, industries, and full descriptions. The jobs table had 1,571 active job listings with descriptions, application URLs, and applicant counts. A role_mappings table with 102 entries revealed the internal taxonomy that normalises job titles to benchmark categories. And a jobs_backup table with 59 older job listings was accessible via the API, an internal migration artifact that was never removed from the public schema.
No rate limiting was observed. A competitor could replicate the entire platform's dataset in under an hour. Once it's been downloaded, there is no way to un-expose it.
I did not test write access. But given that read access was completely unprotected, there is a reasonable likelihood that INSERT, UPDATE, and DELETE operations are similarly open. If anonymous users can modify the salary benchmarks or inject fake job listings, the severity escalates from data exposure to data integrity compromise.
The Pattern
Three products. Three different founders. Three different architectures. Same failure mode.
The code was generated, not understood. When you write code yourself, even badly, you build a mental model of how the system works. You know where the boundaries are because you put them there. You know what's exposed because you exposed it. When an AI generates your backend, that mental model doesn't exist. Nobody holds it. The code might work. The features might ship. But nobody can answer the question "what happens if an authenticated user sends a request you didn't anticipate?"
The answer, in all three cases, was: whatever the attacker wants.
Product A's developer didn't write a bad RLS policy. They wrote an incomplete one. In a system where incomplete means open, that is the same thing. Product B's developer didn't intentionally ship testing endpoints to production. They just never removed them, because they never reviewed what was there. Product C's developer didn't decide to make their salary data public. They just never configured the access controls, because the framework's documentation makes it easy to skip that step and still have a working product.
None of these are exotic attack vectors. I didn't use any tools that aren't built into every browser on the planet. The entire exercise took an afternoon. These are the lowest-hanging fruit in application security, the stuff that a basic code review or a 30-minute pen test would catch. They shipped anyway, because the workflow that produced them doesn't include those steps.
What The Industry Is Saying (And What It's Ignoring)
Ryan Dahl, creator of Node.js, tweeted in January that "the era of humans writing code is over." It got 1.5 million views. Sam Altman told an audience in India that superintelligent AI is a couple of years away. Anthropic's CEO said software engineering would fundamentally change within months. Stripe published a blog post claiming their AI agents submit over a thousand pull requests per week, unattended.
Meanwhile, there's a thread on r/ExperiencedDevs right now with over a thousand upvotes titled "What's the mood at your company?" The responses paint a picture that none of the above want to talk about.
One developer described being told AI-generated code had been tested, trusting it, and having it fail. When they asked for a postmortem, the writeup was obviously LLM-generated. Their summary: "I'm essentially being told we trust AI over you, unless AI gets it wrong, in which case it's your fault and you'll be expected to fix the issue in the time frame we expect AI to get it done."
Another developer lost two job offers for being honest about AI limitations. The feedback was explicit: "We are pro-AI here, and we don't think you'll be on board." That same developer then mentioned that their biggest customer contractually forbids AI-generated code in the product. The industry is simultaneously punishing people for questioning AI and contractually prohibiting its output.
A senior developer said the only change they've noticed is their job getting harder because colleagues keep pushing bad AI code into the codebase and they're the only person who still cares about quality.
Management at multiple companies is demanding 2x, 3x, even 5x productivity increases based on AI alone, with no ability to explain what that means or how to measure it. One commenter interpreted a "20% productivity increase" memo as meaning a fifth of the team would be gone soon. Another pointed out the obvious game theory: if you demonstrate you can do 3x the work with AI, the rational management response isn't to pay you 3x more. It's to cut two thirds of the team.
New company leadership at one firm announced a goal of "zero manual coding" for the year. This from executives who, according to the commenter, "struggled with a PowerPoint presentation and zero technical skills." The engineering team had a good laugh about it afterwards. The executives were not joking.
The Security Implications Nobody Talks About
There is a conversation happening about whether AI will take developers' jobs. There is almost no conversation happening about what it means for security when the people building products don't understand what they've built.
The three products I tested are not outliers. They're the inevitable output of a workflow that prioritises shipping speed over system understanding. Prompt, generate, deploy, post on Reddit, hope nobody looks too closely. The frameworks actively encourage this by making it easy to have a working product before you've thought about access control.
Supabase's "skip the backend" model is particularly dangerous because it inverts the security default. A traditional backend is closed by default. You have to explicitly create endpoints and explicitly decide what data to return. Supabase is open by default. You have to explicitly lock things down, and if you forget, or if the AI that generated your schema didn't think about it, everything is exposed.
But this isn't just a Supabase problem. Product B had a completely different architecture and was just as broken. The common factor isn't the framework. It's the absence of a human who understands the system from end to end.
Security isn't a feature you bolt on. It's a property that emerges from someone thinking through "what could go wrong?" at every layer. RLS policies, endpoint access controls, input validation, error handling, authentication scoping, removing development scaffolding before shipping. These aren't tickets you can prompt into existence. They're the consequences of understanding. And understanding is exactly what gets lost when nobody reads the code.
The Tip of the Iceberg
I want to be clear about something. I did not go looking for these three products. I did not search for "vibe coded app" or "built with AI" or trawl through launch posts hunting for targets. I was browsing Reddit. Three products appeared in my feed, posted by their founders on the same day. I signed up to each one and poked around. Three for three.
That's the part that should worry people. This was not a targeted audit. This was the equivalent of walking down a high street, trying three door handles, and finding all of them unlocked.
AI coding tools have done something unprecedented. They have made it possible for people with no technical background to build and ship functional web applications. That sounds like democratisation, and in some ways it is. But "functional" is doing an enormous amount of heavy lifting in that sentence. Functional means the pages load, the buttons work, the forms submit, the Stripe checkout processes a payment. Functional does not mean secure. Functional does not mean that the database isn't wide open. Functional does not mean that the testing endpoints got removed before launch. Functional does not mean that some bloke with a browser can't grant himself admin in thirty seconds.
The people shipping these products are not stupid. They are often smart, motivated founders who have identified a real market need and moved fast to address it. The problem is that they have been told, by the AI companies, by the tech press, by the influencer ecosystem, that AI can handle the engineering. So they trust it. They prompt Claude or Cursor or Copilot to build their backend, and the backend works, and they ship it. They don't review the security configuration because they don't know what security configuration looks like. They don't pen test because they don't know pen testing exists. They don't audit their RLS policies because they don't know what RLS is.
This is not a failure of individual diligence. It is a systemic failure of an industry that is selling power tools to people without safety training and calling it progress.
And it is getting worse, quickly. Every week, more non-technical founders ship more AI-generated products. Every Supabase tutorial that skips the security chapter produces another marketplace with writable admin fields. Every "build a SaaS in a weekend" YouTube video that doesn't mention access controls produces another platform with development endpoints in production. The volume of vulnerable applications entering the wild is accelerating, and there is no corresponding increase in the number of people checking whether any of it is safe.
If I spent a week actively searching for vibe coded products and auditing them, I would find hundreds with critical vulnerabilities. I know this because I found three without trying. The ratio is not going to improve when I start looking on purpose. I genuinely do not want to go looking, because I know what I'll find, and responsible disclosure doesn't scale. You can contact three founders in an evening. You cannot contact three hundred.
The AI coding companies bear some responsibility here. Not for making the tools, the tools are genuinely useful when wielded by someone who understands what they're building. But for the marketing. For the implication that you can go from idea to production without understanding what production means. For every demo that shows a working app in ten minutes and never mentions authentication, authorisation, input validation, or access control. For creating the impression that security is something the AI handles, when in reality the AI doesn't think about security at all unless you explicitly ask it to, and even then it gets it wrong half the time.
The result is an epidemic. Live products, handling real user data and real money, with no security review, no code audit, no threat modelling, built by people who don't know what those things are, running on frameworks that are open by default, deployed to production in hours. It is the wild west, except in the actual wild west, people at least knew the doors didn't have locks.
The Uncomfortable Maths
At the top of the AI hype cycle, the numbers look like this. Anthropic just raised $30 billion at a $380 billion valuation. The hyperscalers are projected to spend $3 trillion on data centre infrastructure by 2028. CoreWeave's interest payments are six times its operating income. AI-related debt issuance is consuming a third of the US investment-grade bond market.
All of this spending is predicated on AI generating enough value to justify the investment. A significant chunk of that value proposition is that AI can replace or dramatically augment software developers. The pitch to investors, to management, to the market, is that AI-generated code is good enough to ship.
I found critical vulnerabilities in three out of three AI-built products in a single afternoon. The sample size is small. But the hit rate is 100 per cent, and the vulnerabilities were trivial to find. If this pattern holds across the thousands of vibe coded products being shipped every week, the industry is building on sand. Not just technically, but financially. The value proposition that justifies trillions in infrastructure spending is producing code that can't survive a casual security review.
The era of humans writing code isn't over. But the era of humans understanding the code that runs their products might be. And that is significantly more dangerous than anyone raising money on this thesis seems to realise.
Disclosures: All products were tested under normal user access obtained through standard signup flows. All vulnerabilities were disclosed to the respective founders through private channels before publication. No destructive actions were taken. All privilege escalation tests were immediately reverted. Product names, founder identities, and specific URLs have been withheld. No exploit code or step-by-step reproduction instructions are included in this article.
This analysis is based on direct testing performed on February 19, 2026. Industry data draws on reporting from Reuters, Axios, The Information, Bloomberg, and SEC filings. The r/ExperiencedDevs thread referenced was active as of the same date.