TL;DR
I am a senior MLE at Meta and was (temporarily) laid off
I spent roughly the last week (Oct 30 to now) vibe coding BobaLearn
I don’t have UI experience, but I have deployed and built fullstack apps before
I wanted to see what the craze is about and how good the AI code editors really are. Here are my takeaways:
0 to online is crazy fast now: within 10s using Lovable I was able to get something deployed with most of the functionality I wanted. Lovable Cloud handles hosting, DB, and auth
As usual, these apps are good but only get you 90% of the way. Details still matter
Infra and UI are especially important: I spent 1 day migrating off Lovable to Cursor, 1 day moving to Supabase, and 1 day perfecting UI details on mobile
I was able to prompt my way to fixing UI issues, which is a step-function improvement that makes me very bullish on the future of AI code generation
At the same time, AI slop fatigue is real - for the canny consumer, 99% vs 99.9% matters.
I spent time fighting URL auth redirect errors, wrangling API keys, and debugging Supabase deployment
This implies that highly technical, precise and critical work will still require human oversight, and therefore, human employees
Cursor vs Lovable
Lovable is a “true” no-code platform that tries as much as possible to abstract away developer intervention. However, it’s quite limited once you want to go off the happy path, and you are beholden to vendor lock-in at a huge premium: I shelled out $25 for the paid plan, which gives you something like 250 messages/edits, effectively 10c per message. At the same time, you can move dramatically faster since hosting/infra/domain are all handled for you.
Cursor is amazing. It features version control, PRs, picture chat, and an extremely generous (but time-limited) free trial; I built most of BobaLearn within the 7-day trial. It also offers parallel agent processing (multiple agents editing the codebase at once). Recommendation: pick Cursor, with one caveat.
Lovable handles hosting and deployment, which really solves the problem of “I have a .tsx file, how do I show it to others?” They nailed the 0 to 1 in under 10s. Even though I spent a lot of time migrating off the platform, I think the sheer speed was worth it.
Caption: You feel like a 20x-er when using these AI code editors. I hit 285 commits in just a few days (almost none actually coded by myself).
Cursor’s UI and planner
Caption: Cursor IDE at work
When you first install Cursor, it asks whether you would prefer a chat-based or file-based editor. This is a huge UX decision right at the start, and a very canny choice. I picked chat-based since Lovable works that way by default. As a result, most of the time I did not know my app’s file directory structure, and it didn’t matter (which matches other trends in the industry). This separates planning from execution, which is very different from what I am used to as an MLE.
The planner: for more complex edits, Cursor launches a planner module to break the problem down. Seeing the steps is extremely satisfying and interpretable, but Cursor could expose more reasoning traces of what the underlying agent is actually doing.
Will vibecoding replace all SWE jobs?
Short answer? No.
Long answer? Also no.
The role of the SWE will evolve, but you will still need a technical person to review the AI’s code and figure out what to do when things go wrong, which is effectively the same as having a SWE. Knowing what to prompt is also quite critical. We will probably need fewer SWEs, though.
What types of roles will there be in the future?
The ideal YC startup will now consist of a PM with light technical skills and an infra/details person. The PM vibe codes the app, and the infra engineer troubleshoots when things go wrong.
Gemini Flash vs Gemini Flash Lite
Google offers a reasonably generous free tier for Gemini API calls. However, latency was 5s+ when generating a story (of a few hundred tokens). I had originally assumed the free tier added latency, but switching from gemini-2.5-flash to gemini-2.5-flash-lite reduced latency noticeably (>5s down to <2s).
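For reference, the model swap is a one-line change in the Gemini REST endpoint. A minimal sketch in TypeScript; the prompt and API key handling here are illustrative placeholders, not BobaLearn’s actual code:

```typescript
// Swapping models is a one-line change in the endpoint path.
const MODEL = "gemini-2.5-flash-lite"; // was "gemini-2.5-flash" (>5s per story)

function geminiEndpoint(model: string): string {
  return `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent`;
}

// Placeholder request/response handling; the real prompt lives server-side.
async function generateStory(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch(`${geminiEndpoint(MODEL)}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  if (!res.ok) throw new Error(`Gemini API error: ${res.status}`);
  const data = await res.json();
  // generateContent responses nest text under candidates[0].content.parts
  return data.candidates[0].content.parts[0].text;
}
```

Everything else (request shape, response parsing) stays identical across the two models, which is what made the A/B latency comparison so easy.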
Backend frontend architecture
Backend - Supabase. Handles the DB + edge functions
Frontend - I deploy using Lovable Cloud (soon to be migrated to Vercel/Netlify). React and TypeScript are used to build the rest of the app
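To sketch that split: Supabase edge functions run on Deno, so it helps to keep request handling pure and testable, with the Deno entry point as a thin wrapper. The “generate-story” function name and request fields below are my own illustrative assumptions, not BobaLearn’s actual code:

```typescript
// Illustrative shape of a Supabase edge function; the "generate-story"
// name and request fields are assumptions, not from the post.
type StoryRequest = { topic: string; level: string };

// Pure parsing/validation, kept separate from the Deno entry point so it
// can be unit-tested outside Supabase's runtime.
function parseStoryRequest(body: unknown): StoryRequest {
  const b = body as Partial<StoryRequest>;
  if (typeof b?.topic !== "string" || b.topic.length === 0) {
    throw new Error("missing topic");
  }
  return {
    topic: b.topic,
    level: typeof b.level === "string" ? b.level : "beginner",
  };
}

// Supabase serves edge functions via Deno; the entry point would look like:
//
// Deno.serve(async (req) => {
//   const { topic, level } = parseStoryRequest(await req.json());
//   // ...call the model, persist to the DB, then:
//   return new Response(JSON.stringify({ topic, level }), {
//     headers: { "Content-Type": "application/json" },
//   });
// });
```

The nice property of this split is that the API key for the model lives only in the edge function’s environment, never in the React frontend.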
Those are all the main thoughts I wanted to share. Feel free to use AI to expand the bullet points.