Over 40% of the latest YC batch are AI/ML startups
I wonder if this gold rush is leaving openings for building simple solutions that don't need AI and emphasize traditional UX instead of conversational UI. If you have some domain knowledge, maybe you should focus on ensuring your product does a narrowly defined 100% job at what's needed; it may shine when you're pitching against AI-based competitors whose products look like magic 95% of the time but are wrong or unusable the remaining 5%.
"We are not a black box that sends everything to someone else's giant API" can be a selling point as businesses become more aware of their cloud dependencies.
The basic simple (web) solutions were built over the last 10-20 years. Of course, not everything has been invented yet, but if there is a need for such a product, there is a high chance it was already built and the niche filled.
Perhaps a better question is what new niches now open up, on top of the AI solutions.
Hugging Face is a nice example - it's mostly a python library, a github clone, and a discussion forum strapped together. "Old" tech used to serve a new and growing niche.
Hugging Face also hosts models (CLI/code/API), has an extensive community-driven dataset store, and has some great positive feedback loops.
Great Library -> Large Community -> Large Coverage of Models & Datasets -> Larger Community -> Larger Coverage of Models & Datasets -> Revenue -> More Engineers -> Greater Library ...
Not all the steps (->) here are trivial: Contrast HF to Explosion (great folks behind spacy).
> Not all the steps (->) here are trivial: Contrast HF to Explosion (great folks behind spacy).
Can you elaborate on contrasting HF with Explosion?
Explosion's core contribution (not their moneymaker) is the great spaCy library they first released in 2015. It was excellent work, far better designed (IMO) than NLTK and other offerings at the time. Of course, the library isn't monetized. spaCy also has the ability to train and use custom models.
This never transformed into a model hub, despite a lot of people using spaCy and probably building custom models.
Again in contrast, Explosion's other revenue stream (Prodigy) is not SaaS either. It's great software, and I presume it brings in a steady income. But in 2023, I would imagine that HF's LLM hosting and cloud training environment bring in more money than Explosion's data annotation software.
I'll also add that Explosion has a heavy "production-ready, quickly" bent, and even supports wrapping HF models with spaCy. Explosion is probably my favorite company in the AI space and has provided the most tangible value of all the NLP tools I've used.
Surprisingly, there was only one (!) education startup in this batch. Doesn't bode well for us, as we're applying in this Summer '23 cycle, albeit with customers, revenue, PMF, etc.
Don't worry about that at all! It will help you stand out.
Thin wrappers around ChatGPT may be ok to start.
There's no point in creating your own GPT before validating your idea and getting some traction.
Amazon didn't build their logistics and freight division first to then start selling books.
Once your product or service is off the ground and more resources are available, you can start moving away from OpenAI.
This could be much simpler than you think, particularly if you designed your systems with this in mind, and considering that the future cost of training LLMs will be a fraction of what it is today.
And most of those "companies" are shims over GPT. Zero moat, fake it till you make it mentality, gather investor money and maybe we'll roll our own in good time.
Don't know how I feel about that, it quacks like a bubble but there is massive value that can be delivered.
It's the model familiar from previous bubble rounds: 1) Get funding 2) Put a lot of effort into appearances and presence 3) Get acquihired by Google / Microsoft / et al.
The chatbot tools are unlikely to be FAANG material. I'd guess the more likely targets now are places like Salesforce, which can probably find a way to claim value from adding some text generation.
Except step 3 is going to be a lot harder these days. You’d better have a strong story to sell something to Satya.
Sure. But as a VC you will look like a schmuck if you don’t invest in at least some AI startup.
> Zero moat, fake it till you make it mentality, gather investor money and maybe we'll roll our own in good time.
Isn't that the textbook definition of a startup?
I don't think so. The textbook definition, at least what I understand from the YC literature, is that they have a deep insight and technical ability to execute it, so by moving fast they create the moat. They aren't faking it, the potential of a company and market share is there in the ingredients but they lack the runway to quickly take off.
For example, Tesla vs Nikola. One set of founders had a powerful vision and the technical ability to see it through (yes, even Elon; say what you want about him, but he has deep technical insights, more so his cofounders). The other set had a good funding catchphrase, "Electric cars, but with trucks!", and tried to fake their way forward using copious amounts of VC money. Many of these companies sound like "AI, but with ...!"
> They aren't faking it, the potential of a company and market share is there in the ingredients but they lack the runway to quickly take off.
How do you explain that the vast majority of startups fail, then?
IMO the best a VC can do is check that the startup they invest in is not an obvious scam. But then it's impossible to predict which technologies will work and which will not. So they just diversify, counting on the fact that a few of those startups will bring significant money at some point.
Sure, they like to share their "vision" and "predictions", just like traders probably like to share their opinion on the stocks they buy. But at the end of the day, they don't know, they just diversify their investment.
This is needlessly dismissive. How many SaaS startups are "shims" over linux, docker, or node.js? It's still the idea that counts. GPT is a massive enabler and this data clearly shows that. I get that the barrier to entry is at all time low, but is that a bad thing?
Shim over linux?
Right, how many products are just shims over the CPU instruction set.
Most kernels and some OSs.
i don't think that is true. ideas are basically statistics. someone will have a great idea somewhere, but until that idea gets executed, widely spread and tested by reality, it will remain, in effect, as if it had never been had.
so no, i don't think that the idea is what counts.
Wasn't YC always explicit (at least if you read between the lines) about looking for founders with good pedigree regardless of having a feasible product?
Yes, and that makes sense as an investment thesis. But what happens when you attract a bunch of ambitious people with great pedigree and then either steer them toward, or select for, jumping onto a hype bubble at its peak with an undifferentiated, cobbled-together offering?
What concerns me most about this is that it looks like they've optimized for a bunch of "follower" founders.
But I really wonder how many of these founders can convincingly explain in 5 minutes how gradient descent or transformers work. I know that's the first type of question I would ask any founder of an AI company before investing.
The typical thinking among businesspeople is that you can always "hire a nerd" for those things. The early YC was a departure from that with PG praising technical prowess in his essays so much. It would seem this mindset has much less sway now.
Do VCs understand that? Do they care? My feeling is that they want to see a projected growth that looks like an exponential, and buzzwords.
The combination of gradient descent and transformers is a weird choice to ask founders about. One is an almost trivial algorithm, whereas the other is a cutting-edge research result.
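To be fair to "almost trivial": plain gradient descent really does fit in a few lines. A toy sketch, with the function, starting point and learning rate all chosen arbitrarily for illustration:

```python
# Minimal gradient descent on f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The minimum is at x = 3; the learning rate 0.1 is an arbitrary choice.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges to ~3.0
```

Transformers, by contrast, are a few hundred lines even in a minimal implementation, plus all the training lore on top.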
Yup. As far as I can tell, OpenAI is pricing at steep losses right now. It makes me wonder what businesses and individuals will do once OpenAI runs out of money to subsidize everybody and massively increases its prices.
> As far as I can tell, OpenAI is pricing at steep losses right now
How do you figure that? Given the speed at which the GPT-4 API returns data, I feel like serving it easily costs more than even pretty pricey hardware would.
Well, paid users are increasing exponentially. 10 million paid users would be $200 million in monthly revenue already, and that's just one source of their revenue.
OpenAI is nowhere near 10 million paid users, and $200M MRR doesn't necessarily mean you have any profit. We have no idea how expensive it is for them to serve that many people.
How do you know? Given the worldwide buzz and a small sample around me, I wouldn’t be surprised at all if OpenAI already had 10M paid subscribers.
> How do you know?
I don't know for sure, but I've been in software for decades and you don't just get 10 million people to pay for something in a few months. Conversion rates are always lower than people expect.
For something like ChatGPT, the math is going to be something like this:
- something like 150M unique users have tried it[1], which is probably < 50M people (I myself would look like at least 5 users)
- most will never try it again
- of the ones who try it again, a very tiny minority will be interested enough to pay for it because it's just a novelty to the vast majority of people (the only people I know using it for work are researchers and content marketers)
- of the ones interested enough to pay for it, only a fraction will
So I'd wager they're in the territory of 20k-500k users, not 10M.
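To make the funnel concrete, here is the same back-of-the-envelope arithmetic with made-up rates. Every number below is an assumption, but it shows how a huge top-of-funnel collapses into a small paying base:

```python
# Back-of-the-envelope funnel from the estimate above; every rate here is
# an assumption, chosen only to show how quickly the numbers shrink.
signups = 150e6          # reported unique "users"
people = signups / 3     # assumed accounts per person
return_rate = 0.20       # assumed fraction who try it a second time
interested = 0.05        # assumed fraction of returners who'd consider paying
conversion = 0.10        # assumed fraction of those who actually pay

paying = people * return_rate * interested * conversion
print(f"{paying:,.0f}")  # ~50,000 (inside the 20k-500k range above)
```

Tweak the assumed rates and the result swings by an order of magnitude either way, which is roughly where the 20k-500k spread comes from.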
> Given the worldwide buzz and a small sample around me
People who are educated, live in the West, and pay attention to news always dramatically overestimate the number of people with a similar media diet. ChatGPT is discussed in your circles far more than elsewhere. There are literally billions of people who haven't heard of it or don't care.
Don't let your anecdata fool you. Your sample is not representative of humans at all. 60% of the world lives in Asia, after all.
1. https://www.reuters.com/technology/chatgpt-sets-record-faste...
I don't think that our experience in software engineering applies very well to understand what is happening with ChatGPT.
Here is the math I do, starting from the addressable market.
There are 60M professional workers in the US [1].
The number is probably similar in EU and APAC.
This yields ~200M professional workers in the world.
If 5% of them have a paid subscription, that's exactly how you get 10M paid subscribers.
Of course, the debate is around the 5%. The rate in the pure tech industry is much higher (10-20% around me), and it is probably much lower in other areas (e.g. architecture). But 5% as an average is not unrealistic.
[1] https://www.dpeaflcio.org/factsheets/the-professional-and-te...
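Since the whole debate is about the 5%, it's worth seeing how sensitive the estimate is to that single assumption (the 200M figure is taken from the comment above):

```python
# Sensitivity of the top-down estimate to the contested subscription rate.
professionals = 200e6  # assumed professional workers worldwide (see above)
for rate in (0.01, 0.05, 0.10):
    print(f"{rate:.0%}: {professionals * rate / 1e6:.0f}M paid subscribers")
```

A 1% rate gives 2M, 5% gives 10M, 10% gives 20M, so the conclusion stands or falls entirely on that one number.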
Good analysis, thanks. It is something to think about for all AI startups. Adoption will take much longer than you think.
As far as I saw from helping the startups with Launch HNs this batch, many if not most of the AI ones were pivots—in other words, YC originally funded them to do something else. Just FYI.
VCs gonna fad. They want their businesses to go boom or bust within 24 months.
Not surprising. You can sit down and bust out something pretty cool in an afternoon using the infrastructure currently in place (OpenAI, Hugging Face, etc.). Makes more sense than being the 57th SaaS web app in any given space.
If the bar for entry is that low how will these companies survive?
They probably mostly won't. But they don't care now. Now they want VC money. That's how startups work.
They only need to survive for a long enough time until they find an exit. Either through acquisition or IPO.
It doesn't need to be sustainable nor particularly innovative.
It's going to be tough to IPO when you're not sustainable or even profitable these days.
That's where the stat that 99% of startups fail comes from. One or a few winners, a hundred losers - the whole startup ecosystem works like this :)
The bar to making a social network wasn't high from a technology perspective. Same for a lot of enterprise software.
Every investment fund is under strict orders to plow money into AI, because it's going to change the world
So over 40% of YC batch are working on making API calls to OpenAI?
Or building cheaper alternative to OpenAI.
Not as far as I can tell? Any examples?
Also, YC seed isn't gonna get you very far when compute alone for training an LLM is in the millions currently.
Do you know if it's actually true or did you make that up?
This is not going to end well.
Hope this bubble gives us at least two unicorns that don't enslave poor people for a cut.
I'm sure everything will be just fine once the unicorns don't have to hire any labour more skilled than data entry but less skilled than a senior software engineer, and there's no feasible way for workers to exert pressure.
Where’d the VC return be on that?
Unicorns usually become unicorns for a reason, right?
Eh, some will pivot and some will die, like always.
Maybe it's a good way for YC/HN to find out which things don't work, in a short span of time.
It is by design. One in a hundred becomes a unicorn.
Which is not necessarily a good thing. The one that becomes a unicorn generally does not bring a lot to society. Would be nice to invest in constructive projects for a change.
There is a nice essay from PG about how venture funding kind of has to work this way. Also speaking from the experience of a friend who tried it other ways.
If you want >90% of the projects you invest in to succeed, you need to invest in conservative fields and grab 30-70% equity very early on, because your returns will be way lower.
As for not bringing anything to society - you mean to say that the tech field and our startups didn't bring value to society? Seriously?
> you mean to say that the tech field and our startups didn't bring value to society? Seriously?
Wanna talk about surveillance capitalism? Ultra consumerism? The biggest startups in the world have created a world where the goal is to win people's attention to sell ads. The goal of startups today is not to make good and useful products, but to sell stuff. Usually it means that time to market is more important than quality. Remember fitbit? "Doesn't matter if the product is crap, as long as it is cheap".
Not everything is bad, but I think we can definitely wonder if the world wouldn't have been better off without those startups.
In what way?
Definitely not as some grim prognosis of AI taking over or something like that. More in the sense of seeing many of these startups popping up to take advantage of the AI hype, although most of them are quite late to the party, and of the market traditionally overestimating the potential of AI, and the crash that will follow.
One possible way it could end badly would be if OpenAI ramped its pricing up to the point where it takes away most of those companies' chances of ever being profitable.
I'm not saying it's going to happen, and I hope it doesn't, but when I did a little analysis for an idea I have it was absolutely the biggest threat I could see. OpenAI has the ultimate vendor lock-in - there literally isn't an alternative right now.
(This is a giant opportunity if you're a billionaire investor. Providing an AI backend is probably the next cloud infrastructure. OpenAI is going to be the AWS of that industry, but there's room for an Azure or a GCP too.)
There's really no point in worrying about OpenAI shutting down their API or pricing you out. It may happen, but it's very unlikely to happen suddenly or soon.
Yes, keep it in the back of your mind and design your systems in such a way that, when the time comes, replacing GPT doesn't require rewriting your entire codebase.
Your immediate #1 priority in such a competitive market should be to execute your idea and get paying customers; then you deal with breaking off from OpenAI.
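One cheap way to keep that option open is to hide the vendor behind a thin interface, so the rest of the codebase never talks to an OpenAI client directly. A minimal sketch; all class and method names here are made up for illustration, not a real SDK:

```python
from typing import Protocol

class TextModel(Protocol):
    """Anything that turns a prompt into text."""
    def complete(self, prompt: str) -> str: ...

class HostedModel:
    """Hypothetical adapter around a vendor SDK (e.g. OpenAI's)."""
    def __init__(self, client):
        self._client = client

    def complete(self, prompt: str) -> str:
        # Delegate to whatever the vendor SDK exposes (assumed method name).
        return self._client.generate(prompt)

class LocalModel:
    """Drop-in replacement backed by a self-hosted model pipeline."""
    def __init__(self, pipeline):
        self._pipeline = pipeline

    def complete(self, prompt: str) -> str:
        return self._pipeline(prompt)

def summarize(model: TextModel, text: str) -> str:
    # Application code depends only on the interface, never on a vendor.
    return model.complete(f"Summarize: {text}")
```

Swapping vendors then means writing one adapter class, not touching every call site.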
OpenAI’s moat is not _that_ big in the grand scheme of things.
You can train GPT-2 in a couple of weeks on a dozen GPUs. Sure, it’s a lot worse than GPT-4, but not insanely behind, and the community is fast.
Same for RLHF. It is for sure fine engineering, but not unreachable for a motivated and competent bunch.
feel kind of sorry (or excited?) for this batch lol. i have trouble believing that many companies were already working on AI/ML stuff. basically huge land/cash grab right now. surely at this level there has to be a lot of overlap... it really feels like just throwing everyone into the pit and having them duke it out and see who wins, loser be damned. i suppose that's the VC play in general, just a bit jarring to see it front and center.
How many of those are just a minor prompt + API call to OpenAI?
The challenge of building, running, and growing a startup is rarely building some novel algorithm or complex machine. You can reduce almost all SaaS businesses down to "It's just a UI and a database!" if you want to, and many developers do. That only shows how easy it is to over-simplify things.
My first startup was literally just some Javascript and a Postgres database. It didn't even call any external services, let alone one that's truly on the bleeding edge of computer science. I still thought it was a viable business (wrongly, it turned out, but still... I had a dream!).
I find it funny that some people are being dismissive of what they built in order to defend the AI startups.
"Some JavaScript and a DB" is a web service. It does stuff. Sure, if it's small or a basic CRUD app, then the moat is small.
But if I can replace the value proposition of a startup with "open ChatGPT and type a short prompt", then there's not much value in it, right? Maybe if there's value in the surrounding services.
If the business is literally just sending data to GPT and outputting the response then sure, but I've not seen any businesses trying that yet. Pretty much everything I've seen to date is "A CRUD app with GPT to fill in blank data". That's automatically more valuable than just a CRUD app.
That's ok.
This is how many successful businesses operate in the "atoms" world.
You take a product or service that's already popular and has a solid channel for distribution, you transform it or add additional value to it and sell it for a profit.
Think about a $8 precut, packaged pineapple at the grocery store.
Once you get your business going and get additional funding or generate cash flow, you can invest in getting rid of some of the middlemen so that you can take their profits too (start growing your own pineapples).
Sounds pretty straightforward, but the selling part is the difficult one.
Yes, I am aware, and I agree
The value proposition of the sliced pineapple is the time and skill it saves. Sure, it's "cheap" to do at home (if your time is free and you have a good knife - I pay way less than $8, though ;) )
While I do believe there are startups with good value added, I think a lot of people are just throwing up a simple API call with very little thought added. And while the service is sellable, your moat is how easily people can duplicate it.
Is your service a product or a feature? If it's a feature, OpenAI can just add it to their system in a day.
Is comparing to the atoms world a good comparison, though?
Especially when these products can be built by a school student in an hour using ChatGPT?
I think the barrier to entry for atoms-world companies is quite a bit higher than that.
I guess my point was that we see examples every day in the atoms world where people take an existing product, do some trivial work around it, and create a multi-million-dollar business.
Generally yes, software is usually easier, although today you can start a new brand of vodka or cosmetics from your computer.
Customer acquisition is equally difficult for both, though, so sales and marketing is where the secret sauce and differentiation is now, not so much in the codebase, I think.
Look, you’re not going to get investor money to train an LLM and scrape a ton of data sources at this stage
Is what it is
Probably more than 90%?
I hope one of them is using machine learning to recognize photos of melanoma. I was thinking in the shower this morning that would be a handy service.
The recent buzz is over generative AIs. That kind of medical research has existed for a long time; it's not really part of this buzz (or not directly, at least).
Always thought that YC was meant to pick the Crème de la Crème of ideas. Different times indeed.
They still are. It just happens to be sitting in the middle of a lot of noise.
Sure...
They sure 'really' are AI companies, and totally not just wrappers around OpenAI's or ChatGPT's API, giving Microsoft and Google more ideas to absorb for free, with another AI bubble waiting to pop as soon as OpenAI increases prices. /s
Then we will see how unprofitable these so-called AI hype 'startups' are.
It is just ridiculous made-up stuff. I cannot wait to see how much damage a bunch of these AI companies cause. For example, I checked out one (which I won't name) that does predictive YouTube analytics, where they predict how many views you might get based on thumbnails, and they are off by multiple orders of magnitude. It's not even close.
Or these AI wrapper companies will get a head start on building the software to use LLMs, while others work on getting these types of models (from open source or competing offerings) to run directly on our phones.
Surely this technology will be used for the benefit of humanity. By 2030 we will be working 15 hour weeks and all people will have homes!
It will be interesting to see what happens. By not taxing capital gains at rates similar to or higher than income, most of the new wealth being generated was already moving to owners of capital rather than the workforce. Now this is going to compound the problem further, as the last avenue to build wealth is closed off for people without capital.
(for the sake of critical thinking)
Or: the rich get richer, the poor get poorer and won't have access to AI tools, society will collapse, more jobs will be low-paying, and the most in-demand skills will be those involving physical labor.
> society will collapse,
The great promise of robotics and AI is the possibility of decoupling societal stability from the vagaries of the poor.
That's not the great promise of AI. The great promise is relieving humans from all types of labor and freeing their time to create, explore, imagine, love and live. But human time would only be useful for such goals if proper societal structures are put in place, which is a political issue, not a technological one.
In our current compounding-capital model, what you get is a depreciation of all human capital to zero, therefore everybody who doesn't already own capital is fucked for eternity because they have nothing to compound and get the ball rolling. Zero social mobility, absolute polarization between an AI-owning class and an own-nothing class for as long as revolution is prevented.
How?
By having armies of semi-intelligent armed drones, that will kill the poor if they decide to riot?
By bringing wealth so cheaply that it can be given away.
We already have many things that are given away for free, whereas 100 years ago they were considered luxuries (books, pens, paper, access to knowledge in general, and in Europe, public transport and healthcare, plus to some extent food, heat and electricity).
Would the wealthy and the powerful be happy for everyone to become wealthy?
Why not?
If you’re part of a startup ecosystem, you’ve seen how it works first-hand: the wealthy and successful sharing knowledge with others.
Then that’s not wealth anymore, because wealth is relative.
Sounds like a good guess.
Computers, the web, mobile drastically increased the productivity of office workers, but somehow they don't work 15 hour work weeks.
Yeah humanity messed up somewhere. Instead of working 15 hours in a sustainable world, we work our ass off destroying it so that a few can get rich.
Surely YC will be the force pushing for that and not just the enrichment of a few people who are already insanely rich.
Most startups funded by YC are from first-time founders, and definitely not from people post-exit. The whole idea, and the value that the startup ecosystem brings, is giving anyone the opportunity to enter the field and compete with the establishment.
For me, personally - before the startup ecosystem happened in my country, it was extremely difficult to break through. If you worked for a large entity - good for you. If you wanted to do something on your own - you weren't treated seriously, and it was very difficult to get through.
Now, people from anywhere, can visit meetups and hackathons to get up to speed, and gain connections. And accelerators like YC provide resources and mentoring to the ones that are the best.
Also, notice that YC has an open application, and they don't look at your social status, your wealth, nor your connections, when judging your application.
Of course it's not perfect - being from the US, or the Bay Area, helps. Knowing people who got through YC and can help you prepare your application also helps.
But I haven't heard of anyone with a better idea for solving this.
Why do you think YC does what they do?
* applies to employees of AI companies only
* for the duration of their employment and not to yellow badge employees
Is there still gold to be had, or are they just enriching the shovel-sellers?
oh yeah, I've noticed this trend on Indie Hackers. Lotsa lotsa new products being advertised as AI-powered and whatnot. So it doesn't surprise me a lot.
"We're So Lo Mo... I mean So Mo Lo"
The companies that stuck with paper and didn't adopt computers, the web and mobile went out of business. The same will happen in the near future with the companies that are unwilling or unable to adopt AI/ML in their business.
Some businesses absolutely have adopted IT with vigour and massively improved the way they work, but many haven't. They've just 'ported' their old paper processes to computers without learning or improving anything. Sending emails is faster than using paper memos but if you're just sending memos by email nothing is actually better. Using an Excel spreadsheet with no macros, formatting, or functions doesn't really give you much that double entry bookkeeping in a real book wouldn't already have given you. Plenty of people use Word with everything 'annoying' switched off, and they could do the same job on a typewriter.
Similarly, lots of companies spent thousands of dollars on a website or a mobile app that no one looks at because they're a small town landscaping business that really doesn't need those things.
The same goes for AI. There are definitely use cases where it will improve how we do things. For some businesses that will make them more profitable and more competitive. A lot of businesses won't adopt it for decades. Some will adopt it when it brings them no real benefits.
AI is a tool. People will often use it poorly, just like every other tool.
Typical grifter take, using analogies, which are always a bad framework for analysing and predicting events and trends based on history. A few years ago other grifters were saying the same about blockchain and NFTs, and a year ago the same thing about the metaverse/Decentraland.
Exactly. What was the corresponding percentage of Blockchain and/or Cryptocurrency start-ups say, 2 years ago?
Do tell - how do you analyse and predict events and trends NOT based on history?
I'm puzzled by this take: surely recent history - the cryptocurrency bubble, the hot air and lack of substance, the extractive scams - is a useful lens for current events? But it's not everything; we can't be entirely "based on it" deterministically, as we do indeed "suck at predicting".
That’s easy: if you have a framework of knowledge around things like physics, chemistry or biology (add any domain here), one can predict outcomes of hypothetical use cases with considerable accuracy. Unless you lump all domain knowledge into “history” because it has to have happened in the past…
Do you think chemistry or biology knowledge was given to humans by the gods or ancient aliens, and not based on a history of practical knowledge, experiments and research?
That's why I added the last sentence. But for the purpose of debate, it is reasonable to distinguish domains like physics from history. It is one thing to study how something was discovered, and another to understand the parameters around an event so that you can predict what might happen.
My point was that saying “historical event X is somehow similar to ongoing event Y, thus we can reason about and predict outcomes of Y based on X” is a false methodology - appealing, but false.
A first good step IMO is to accept that we suck at predicting.
We are great at predicting. The number one task of the huge human neocortex is predicting the future.
If you look at past HN comments, there were users who predicted the rise of OpenAI, the war in Ukraine, and so on.
Predicting isn’t some black-and-white skill. The neocortex doing calculations in the background to predict how heavy a cup of coffee might be, or how hot the stove is, is very different from consciously predicting geopolitical events, which depend on countless other variables. In a world of 8 billion people, of course you will see some get things right, and then conveniently point it out… but most people’s predictions are wrong, and no one pays attention to those results.
Did you forget a "/s", or do you actually believe that?
No, but they are playing word games. Yes, the human brain is a great engine for predicting future events and acting on them: how to catch a ball that is coming towards you and will arrive shortly, how to throw an object and hit a target, how to pick up a heavy object and move it to where you need it, etc.
But this is not at all the same kind of "predicting future events" as accurately saying where e.g. the CryptoCurrency or VR industry will be in a decade or so.
Remember when people paid good money for Second Life real estate?
I have yet to encounter a single valuable (in terms of hard currency) use case for GPT besides automating publishing. Yes, you can generate text automatically now in relatively high quality. But that's a business that was struggling before GPT came along and I don't think we actually need that much automatically generated bullshit content. Yes, a language model will help authors to write their articles/books quicker, but that's not a significant fraction of their time spent, hopefully.
I think language models will become a commodity, like a spell checker is today. They might open up the market for translations into more exotic languages (but I don't know how the quality is for such languages and by definition, the market ain't big) and I think every author will use one as part of their workflow (think of technical writers using a language model for the textual parts). Computer games might profit massively in terms of immersion.
No, I don't think that new applications will become that important with language models. I think it's the other way around. There are so many applications that we simply don't need any longer. Language models and image recognition models together will take over the user interface industry. Why bother with web design and user experience if you can just throw the chatbot at your client? That's going to become the real value here.
If I was a junior web developer, I would begin to seek out new programming languages and stacks.
Yes, but there's a massive difference between a startup focused on a broader business issue incorporating AI into its plan on tackling that issue and a company focused on AI as the primary product.
The former are going to mostly be successful, particularly in the place of companies that don't adapt, but the latter are mostly going to be obsolete by the time they'll be several years into it.
AI as a product is a TERRIBLE play. But AI as part of a solution is going to be necessary for nearly every business vertical.
Survivorship bias. You select technologies that won, and assume your new technology will be the same.
What about all those technologies that just died? Companies that did not embrace Minitel, blockchain or the metaverse seem just fine right now.
Except these companies are just thin wrappers over OpenAI APIs
Today they may be thin wrappers over OpenAI APIs, but if they start bringing in big money they can hire data scientists and build their own models.
That is a big "if", which doesn't cut it with startups looking for VCs, given that 95% of these ML startups depend on OpenAI's APIs. What I am looking for is an "even if".
So even if they make money AND hire data scientists, there is a good chance they will run out of money quickly, since they are competing against Google, Microsoft and OpenAI, all of which can afford to run everything at a loss or for free and then raise their prices, which eats into these startups' revenue.
Either way, they are locked into these APIs, and they aren't focused enough on profitability to even begin building their own models at this stage. If they are not profitable by the time the price increases come in, they are going to be struggling for survival.
Frakkin' toasters everywhere!