Will AI usher in an era of explosive economic growth?
No. Undoubtedly no. There are some applications that will benefit directly from this technological improvement, but it is not as widely applicable as everyone is making it out to be. In the areas where it is broadly applicable, it will improve efficiency but also cost a few jobs, which might make it mostly neutral from an economic point of view.
Those lost jobs might mean other markets have less demand.
No. It will descend into scamming sweatshop bot offices in the dusty reaches of the world, run by people thinking they can make a faster buck by using fewer employees than before AI (think along the lines of PeoplePerHour, Mechanical Turk, call centres, help centres, IT software support).
In a perfect world, consumers would not tolerate bad support/service now that anyone (literally anyone) can learn to provide good service with an AI. The bad companies would go out of business, and the few struggling with the consequences of incompetent support would learn that courts now define insufficient support as gross negligence.
Ironically, more production does not always lead to growth. A thought experiment: if you gave everyone a machine to produce miniature bombs, would there be economic growth? There would be a huge increase in the number of bombs produced (creating GDP growth), but then a huge loss of productivity due to destruction and the extra steps needed to protect things from the copious bombs in use. Overall, this would lead to a loss of wealth and productivity. I could see this happening with AI as well, but with fewer moral barriers to using it than a bomb.
Your bomb analogy sounds like the modern military industrial complex…
Current LLMs are a "blurry JPEG of the Internet", essentially a fuzzy index instead of the character-accurate Google Search index.[1] Both have their utility, with a lot of overlap.
LLMs can't replace most human jobs, in the same way that Google didn't replace most human jobs. However, many people became more productive thanks to modern web search, and a few people did lose their jobs or were downsized. Nobody hires a research librarian in a private company these days because employees are expected to do their own searches!
The same thing will happen with LLMs. It'll be an alternative to Google Searches and perform much the same function, extending the capability to fuzzy searches and contextual searches. It'll be integrated with character-accurate indexes, and then there will be one "ask the Internet" product. It'll be useful. It'll make everyone more productive. I don't think it'll replace any of us any time soon. Maybe in 15+ years, but not next year.
[1] Most of the criticism I've seen of LLMs stems from a misunderstanding of what they do and how they work. People expect character-accurate output, such as URLs and references. It's not an index, it doesn't work that way!
Not sure why everyone is saying no; the answer is obviously yes. Having infinitely patient and accurate brains that can spin up on a dime will be the greatest economic boon we have ever seen.
Okay woah.
Absolutely nothing about our current generation of AI is accurate at all.
67%, or even 87%, on synthetic benchmarks does not intelligence make.
It’s all statistics based: it’s not infinitely accurate, nor do we have any reason to think any AI system would exhibit anything resembling patience. They don’t exist outside of inference time, let alone have a sense of the passage of time.
Sure, today it is, but what about in a year or two? And yes, AI models are way more patient than humans for the exact reason you mention; the logistics of that don't matter, the result does.
Sure, that would be true. I think people are more interested in talking about the AI we actually have today.
Sure, but we aren't far off from a GPT-5; this year we will see much more capable models.
Bird brains, maybe. I'm not sure what your expectations of GPT5 are but I meet GPT4's limits every day.
So take me, for example: a single father. How will AI make me more money?
GDP growth doesn't necessarily mean any one person will be better off. But regardless, if you, say, build something with the help of AI that you couldn't do before, it could make you money, depending on how much you sell.
The ease with which he can use AI is the same as for everyone else. Therefore he gains no relative benefit and can't "make money from AI".
However, information-retrieval and "thinking assistance" helps everyone at the same time. Asking for summarization and clarification of any (legal or technical) document, finding the right references, and so on. It's like everyone gets instant access to a research department.
This means fewer people are needed for "clerical jobs". Effectively, it diminishes the economic value of memorisation and mechanical information work. But that's probably a good thing.
> The ease with which he can use AI is the same as for everyone else. Therefore he gains no relative benefit and can't "make money from AI".
Not really, people are not equal in output. Some people can use their tools much better than others, like juniors versus master craftspeople. Even in startups we see some succeed while many fail, and while there may be macroeconomic conditions (or otherwise those outside of one's control) for such failure, some still rest on the founders themselves and their decisions.
Just because there is growth doesn't mean the average person will get to partake in it. The economic model will most likely have to be adjusted.
What will it cost to run said brains? But, much more intriguing, what is this source of infinite accuracy?
Depends. The issue is that LLMs effectively “centralize” the functionality into a small set of models, which means that if the cost of service drops too much relative to the increase in demand, GDP may perversely decrease. It all depends on whether AI actually increases net demand.
How will there be economic growth if humans will keep losing their jobs due to AI?
You know we do need a customer base of actual human beings to sell things to. AI buying AI products AIn't gonna cut it.
This is why I believe capitalism cannot survive AGI
Economic activity benefits the people who, in the end, consume the products and services. Today, there are long and opaque supply chains, but I think they mostly serve this goal. Economic growth, however, does not only require better products and services in the pipeline. It also needs consumers who are ready to adopt these innovative products. A significant part of the world has aging populations (China, Japan, western Europe). There, a large fraction of the population might not adopt new and innovative lifestyles.
I think this is the reason why every innovation takes time until it's phased in completely and the entire society benefits from it. So, neither AI nor other innovations will usher in an era of explosive growth.
I can honestly say using Copilot and ChatGPT semi-regularly as an aid has increased my productivity by a nonzero amount. If this is the case for a large portion of people who work with computers regularly, then I think it could very well have an upward effect on economic growth, maybe just not super explosive. Yet what percentage increase in productivity would be the threshold for "explosive" growth? 50%, 10%, or even 1%? With things like Copilot for Office, or a ChatGPT button built into your laptop, making the use of AI so easy and seamless, many computer users may simultaneously experience a boost in productivity. It may just be less noticeable than one would expect.
To a first approximation GDP is what households consume (and household consumption is kind of the point of GDP).
How will AI increase aggregate household income explosively? Creating a few more billionaires is just measurement noise, not even visible in the trend line.
This feels like 1998. Give it a decade for the real businesses and use cases to emerge.
I've been thinking of this recently from the perspective of AI as the new mechanization. Not a brilliant idea, but the reason the Luddites destroyed the looms was that they saw themselves as mechanical beings. Most of the work done in society was mechanical. We made things. Very few people were responsible for thinking.
Now, society is information based, and we see ourselves as the thinking machines.
Just as the industrial revolution didn't remove humans from all mechanical work, AI won't remove us from all knowledge work, but I believe it will uncover the next level of humanity. If we're not only mechanical, and we're not only cerebral, what are we?
The Luddites did not destroy the looms because they objected to the idea of mechanized looms, but because they objected to the politics and exploitation pushed by those introducing them: https://www.kirkusreviews.com/book-reviews/brian-merchant/bl...
You realize we are mechanical, thinking beings. What else is left for us to do?
Social engagement. There has always been a small segment of people able to make a life that way, but with the other two concerns out of the way it opens the floodgates.
This is the conclusion I've been coming around to as well.
> There has always been a small segment of people able to make a life that way
This is the same with the move from mechanical to thinking. There was always a small segment of "thinkers" for everyone else, it was secondary to their mechanical abilities.
What has me thinking now is, if this is true, when we figure out how machines can be social, do we unlock another element of humanity which we don't recognize or appreciate yet?
We can legally own things. Maybe in the future, our income is derived from farming our AI servers. Just hook them into a broader network and allow people to hire them out for AI tasks. We just focus on building our bot army.
But you're not building that AI server from the chip level, you're buying it.
If most of the money is made from renting servers, the manufacturers will only rent servers. They have nothing to gain by cutting themselves out of the revenue stream and introducing a secondary market.
We are also feeling beings with desires and aversions - we can judge things as that which we want vs that which we don't.
Yes, but not in the way they think... The tech giants capable of running these AIs at scale will soon expand their business into selling "certified human created content" back to us.
I think it will further enable corporate growth. Current "AI" is essentially corporate lubrication. In condensing 30 years of internet content into vectors, it enables the creation and maintenance of systems close to the mean. It will make it very easy for boards to get rid of middlemen and run skeleton crews. The fine tuning left to do is all about variance and liability. It won't enable new things, it will enable more and cheaper common things.
The problem is, the AI needs to stop getting better (or only ever get slightly better) after a very specific point. It needs to be good enough to avoid getting tangled up in bureaucracy, but not be good enough to take over/wreck the current economic and political systems (at the least).
It should, if most of what the "AI experts" say lines up. Though the growth will ultimately only be felt by the corporate overlords, the government, and the owners of AI server farms.
I have a (non-serious) conspiracy theory that the reality of AI is exactly counter to what we think. Once AI takes hold, we will no longer see such extremes of income inequality.
With that being the case, there are efforts underway to stifle AI. It looks like big business hasn't been the quickest to adopt. Meanwhile, it's been full steam ahead on things like self-driving cars, even though at times the level of safety has been exaggerated (at least in the early years).
P.S. This is probably a load of nonsense, as evidenced by the many people working on AI and all the money going into it, but it seems like business hasn't been the most enthusiastic. And it's never because they truly care.
P.P.S. I also don't know how that would work exactly, but I could see things looking different with everything working fine and "employees" now having free time: no money to give them, and plenty of time to think "hey, why does that guy get all the stuff while we starve? Maybe we should find a way to fix that."
There's also the reality that while proprietary AI models could be bad for workers, AI could also be bad for big business. Highly disruptive, if this can't be controlled. It's not always material costs; sometimes the issue is just that you could never staff teams of engineers to work on a problem. Or you have a staff of engineers and need artists... here it seems the artists could actually have the upper hand, which is nice to see :)
> Once AI takes hold, we will no longer see such extremes of income inequality.
That's a very optimistic view. From where I'm sitting, it seems like the rich people control all three of the important AI companies (OpenAI, Google DeepMind, Anthropic) and all one (Nvidia) of the important chipmakers, so they will likely get even richer and many comparatively poor people will lose their jobs.
This isn't a view, I would consider it more of a fringe theory without evidence. Nor am I optimistic.
Under this "concept", we are actually being held back and robbed. The exact reason this is undesirable isn't the clearest, and the exact mechanism would be a bit mysterious. If I could explain and prove it, it wouldn't be a fringe theory.
It's a way of reasoning that looks only at behaviors through a cynical lens: just looking at how business has responded compared to e.g. self-driving cars, data security, etc.
Does it not feel like there's some extra care here that doesn't really make sense compared to how industry handled similar issues?
The bigger counterpoint would be the money and effort that IS there. (There is plenty, and it's a huge flaw in this theory.)
Getting to the hypothetical reasons it could go this way: 1. With more people out of work, there will be more discussion and organizing about why some people get so much stuff while others are destitute. People will no longer be too tired from work and will instead be either a) [good] living lives of leisure OR b) [bad] out of work, desperate, and pissed off.
In any case, with so many people of leisure, it will actually become clearer that we're all the same. Who cares who your daddy is? Why do we need gorgeous empty properties, while we cannot provide basic housing to the population. Look at the extreme and disgusting wastefulness from the "elite" in society.
With everyone working and fighting for limited resources, you can hide this. While we all may go down to the office and sit at the desk for a similar time, some are able to inflate their contributions... sometimes legitimately, but who cares? Usually it's not the most dedicated, impactful workers, but the various princes and kings fighting over the loot. Either way, it makes some people look so much more valuable. When no one's doing anything, it looks different.
2. With, e.g., ~40-80% of people out of work and labor no longer a factor, there won't be money flowing to pay the capitalists. This could result in economic collapse, hurting us all. However, perhaps we could see changes in the economic system that only hurt the wealthiest.
3. Human resources are actually an extremely common technical moat. If you've worked on technical projects, you've probably realized that while one person can get a shocking amount done, the work can also easily become impractical for a single person or a small team.
Now it will be harder for corporations to lean on their intellectual property. When the tech doesn't do what people want, they'll just create one that does. Today this can prove absurdly impractical, but it might be much easier in the future. Currently, many companies rely on the fact that customers are likely locked in; they don't have options because it's a multi-billion dollar investment to make the product, so they're forced to accept what the company wants.
Yes. I believe so. Maybe it won't happen this year. Maybe not next year. But in 3 years, I think the world will radically change.
Here's an example use case we found for our business:
Our sales people request invoices from a potential customer. On those invoices are our competitor's services and prices. We have matching services and our own prices. The goal is to find similar services where we charge less. In the past, our sales people would spend hours combing through those invoices. We wrote a prompt for GPT4, fed in our services and prices, and asked it to find services we could potentially replace, as well as our profit margin. It took us a day to write this prompt. The results were outstanding and GPT4 gave accurate results. We even asked it to package it all up in a PDF for us.
This will save our company hundreds of thousands each year and we can get back to the potential customer much faster than before - increasing the likelihood of a sale.
If we had to program this like normal software, it'd probably take months to get it right. Chances are, engineering would never even prioritize this feature for our sales people.
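For readers curious what that kind of "one-day prompt" looks like in practice, here is a minimal sketch of the prompt-assembly half of such a workflow. Everything in it is invented for illustration (service names, prices, and the prompt wording are hypothetical, not the commenter's actual prompt); the model call itself is shown only as a comment, since it depends on your account and SDK.

```python
# Hypothetical sketch of the invoice-comparison workflow described above.
# All data and wording here are made up for illustration.

def build_comparison_prompt(invoice_text, our_services):
    """Assemble a prompt asking the model to match competitor invoice
    line items against our catalogue and flag where we charge less."""
    catalogue = "\n".join(
        f"- {name}: ${price:.2f}" for name, price in sorted(our_services.items())
    )
    return (
        "Below is a competitor's invoice followed by our service catalogue.\n"
        "For each invoice line item, find the closest matching service of ours.\n"
        "List every match where our price is lower, with the estimated margin.\n\n"
        f"--- Competitor invoice ---\n{invoice_text}\n\n"
        f"--- Our catalogue ---\n{catalogue}\n"
    )

our_services = {"Managed backup (per TB)": 40.0, "24/7 monitoring": 99.0}
prompt = build_comparison_prompt("Backup service, 1 TB ........ $55.00", our_services)

# The assembled prompt would then be sent to a chat-completion endpoint, e.g.:
# client.chat.completions.create(model="gpt-4",
#     messages=[{"role": "user", "content": prompt}])
```

The point of the anecdote holds in this sketch: the fuzzy matching between "Backup service" and "Managed backup (per TB)" is the part that would take months to hand-code, and it is exactly the part delegated to the model.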
GPT6 with much higher context and much cheaper inference cost? Yes please. I think people can't imagine how it's going to change everything.
Totally agree. What is hard to see right now in business is talked about by Charlie Munger in Poor Charlie's Almanack.
What you describe will save your company money because you are an early adopter, but in the long run everyone is going to do these kinds of things and the savings will be passed on to the consumer.
Munger mentions this talking about a textile business they had. The new more efficient machine wasn't going to make the business better but just end up passing savings on to the consumer so they actually sold the business.
Management wouldn't have prioritized that project for engineering because it would have cost too much and have uncertain benefits given the cost.
This is all massively deflationary, and certain highly prized skills that cost $120k a year right now will be $20 a month in 2024 dollars someday.
Agreed. I think many businesses are finding use cases like ours. They're just not on HN disclosing it all.
I've said this many times but the only thing stopping me from using GPT4 API for everything in my life is inference cost - both context window limitations and cost per token. I would try to feed everything into GPT4 if I could.
Inference will be solved one day.
The cost to serve a website today is probably millions of times cheaper than in 1998. Heck, Cloudflare literally gives you unlimited bandwidth for your website for free. It's that cheap today.
Inference cost is much higher today; when doing inference is as cheap as loading a website is today, I think the world will be profoundly different.
No.
(Betteridge's law of headlines, but also true in this case)
I'm not sure why the submitter went with that headline instead of the actual headline of the article: "How AI could explode the economy and how it could fizzle.". The piece itself discusses both "AI will be a huge economic boon" and "AI will have little economic effect" stances.
My prediction is that it will not cause explosive economic growth, but it will have noticeable economic effects that will benefit some at the expense of others.
More interested in the opinions of those on this site and I thought the article would prompt some interesting discussion.
The established practice is to use the same title as the article you're submitting (with a few exceptions). From the guidelines:
> please use the original title, unless it is misleading or linkbait; don't editorialize.
So that's what people tend to expect, and is why I was surprised when I clicked through.
other things that are explosive
diarrhea
bombs
fireworks
So imagine a gigantic bomb of diarrhea fireworks; that is what AI will be like. I would type that into DALL-E, but I'm afraid OpenAI would ban me and/or make my entire history public on LinkedIn.
Grasping at straws here, late stage capitalism is.