The Wait Equation and AI Investment (javednissar.ca)
The wait equation is totally irrelevant here. Where the wait equation has relevance is the fact that a generation ship, once launched, is likely unable to reconfigure itself to take advantage of new technologies invented while it is underway. This is key to why there is an incentive to wait, and there is no parallel in the idea of investing in AI. Someone who chooses to invest in AI does get access to tech that develops in the meantime. Indeed, it is that investment which generates the technology.
Alastair Reynolds covered this theme extensively in Chasm City, including technology advancements being transmitted to generation ships en route from the home system and the consequences of that. Super interesting reading!
> Where the wait equation has relevance is the fact that a generation ship, once launched, is likely unable to reconfigure itself to take advantage of new technologies invented while it is underway.
Given my experience with complex technology (software), I think it's rather likely that a generation ship will reconfigure while traveling.
Maybe we will build a ship that's just enough to bring them to a place with lots of resources. Then it will be rebuilt with those resources for the final journey. Or new ships might even be built out of existing ones to adapt to varying conditions.
Similar to biological life on earth.
The thing it would have to reconfigure to get to its destination faster is its drive and fuel. These are not things that it can likely change during the trip. You can't digitally transmit antimatter or a different fusion fuel mixture, nor could you reconfigure the reactor or engine chamber to burn something it wasn't designed to burn, even if you could supply different fuel.
Pretty much all scenarios for accelerating to cruise speed and decelerating at the other end involve very high-density fuels produced using a significant fraction of the Sun's energy output. You're not going to be able to stop in the middle of empty space, where there is absolutely nothing, in order to reconfigure and refuel.
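To put rough numbers on why the drive and fuel are locked in at launch, here's a back-of-the-envelope sketch. The values (cruise at 10% of c, fusion exhaust at 5% of c) are my assumptions, not the commenter's, and it uses the classical Tsiolkovsky rocket equation, ignoring relativistic corrections:

```python
import math

# Back-of-the-envelope sketch with assumed values (not from the thread).
# Classical Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(m0 / m1)
c = 3.0e8                  # speed of light, m/s
v_cruise = 0.10 * c        # assumed cruise speed: 10% of c
v_exhaust = 0.05 * c       # optimistic fusion-drive exhaust velocity

# Accelerate to cruise speed, then shed it all at the destination: 2x delta-v.
delta_v = 2 * v_cruise
mass_ratio = math.exp(delta_v / v_exhaust)
print(f"initial-to-final mass ratio ~ {mass_ratio:.0f}:1")  # ~55:1
```

Even with an optimistic fusion drive, the ship is overwhelmingly fuel by mass at launch, which is why swapping drives or fuel mid-trip isn't a realistic option.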
This completely misses the concept that the technology doesn’t just create itself without investment. SOMEONE has to invest for that progress to be made. It’s why I always hated the application of this to tech.
> SOMEONE has to invest for that progress to be made.
History often has cases where the surrounding context/infrastructure/market/culture just wasn't suitable for an invention to take off--so it fails--and then somebody 5-100 years later tries almost the same thing and it becomes a wild success and they get all the credit for it.
For example, look at the AT&T Picturephone from ~1964. [0] Even if someone from today time-traveled with the benefit of ~60 years of economic and technological hindsight... do you think a strategy of "just keep investing--somebody has to" would have been enough to make the video-phone a widespread and profitable thing in any reasonable timeframe?
I'm pretty sure you'd go quite bankrupt first, just trying to stay solvent while making hundreds of other enabling-inventions and cost-savings in a wide range of fields while completely replacing global infrastructure.
But we did: people continuously invested through the '80s, then the '90s, then Cisco, then Google, and now Zoom. Do you and the writer not know that video calling saw consistent investment for over 40 years?!
By that logic, somebody was so desperate to get convenient warm food that they decided to "invest" for decades in side projects like airplanes and bombs just to finally get a microwave oven, the obvious true purpose of everything that came before.
Just because something else in civilization might somehow help someday isn't enough to make it part of "the investment" which a company or individual is deciding whether to make or abandon for a different goal they are focused on.
Sorry, but I feel like you are trying to be clever here and failing entirely, because yes, that is how it works. That continuous investment yields unexpected outcomes is the core tenet.
Dude, the core tenet of this forum is making a good-faith [1] effort to really understand the fantastically complex issues at play here, and then to communicate about them honestly in a collaborative, rather than competitive, spirit.
You are not doing that.
[1]: https://www.cato.org/sites/cato.org/files/2020-07/Good_Faith...
Dude…
https://news.ycombinator.com/newsguidelines.html
^ Go read the comment guidelines. Really read them.
I dare you (rhetorically, not really: I won't read any further replies from you so save your breath) to then assert with a straight face that you've been paying even a lick of heed to a single one of them.
With this in mind, I am unwilling to engage with you further on the matter-- but I will say in closing that I genuinely hope that you may someday find true peace and self-acceptance.
I don't mean this flippantly: you'll truly be lots happier if you can eventually let go of the compulsive need to "win" every argument or exchange and learn to actually listen to others with an open mind.
Where's dang at?
:eyeroll:
Investment is not the sole input into creating technology; time matters too. The transformer paper probably didn't take much investment to create, but without it, any attempt at creating LLMs would have been futile no matter the billions poured in.
Moreover, technology is created as a spillover effect from investments in other areas (that aren't speculative). CUDA is only possible because Nvidia was supported by gaming revenue for a decade. Without CUDA there is no AlexNet, and no deep learning boom would have happened if we were still on CPUs or Google's proprietary TPUs.
I'm sorry... but what?! As someone who worked on early LLM tech at Google: hundreds of millions were poured into it. Do you think that paper just spontaneously came from pure theory?
Most of the core idea of transformers was invented in the early 1990s, including what at the time were termed Fast Weight Programmers and are formally equivalent to linearised self-attention: https://people.idsia.ch/~juergen/fast-weight-programmer-1991... . Google just had the hardware to actually run and experiment with large transformers.
It would be very funny if at some point someone stood up to Schmidhuber and told him they already created something he did, but yet another 30 years earlier.
At least you concede then that Google invested in the hardware.
Jesus, the arrogance here. Are HNers this ridiculously out of touch?
That is the investment, and it was no small price. "Just" is carrying a ton of weight here.
Yeah, information is different, since once you find something out it's often trivial to replicate and deploy. The cost is coming up with the knowledge in the first place.
someone ELSE
This is the problem with predatory game theory. Yes it’s “smarter” but it requires people to take the loss and get eaten for the “smarter” players to smugly claim victory. And if everyone acted like them nothing would work.
I hate it and I hate the arrogance of essays like this. They dive deeply into entirely missing the point.
Reminds me of the classic paper "The Effects of Moore's Law and Slacking on Large Computations": https://arxiv.org/abs/astro-ph/9912202
That one was so great, passed around in a number of circles by the credulous. It was a magical time!
Also makes you think.
There's a fun application of this argument to the bit in The Hitchhiker's Guide to the Galaxy books, where a civilization runs a 7.5-million-year-long computer program to calculate the answer to the question of life, the universe and everything.
Someone pointed out that they would have been better off waiting for about 40 years of Moore's law to happen, then building a computer and running the same calculation in about 2 years.
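To make the trade-off concrete, here's a minimal sketch of that calculation. The parameters are my assumptions (a 7.5-million-year initial runtime and a fixed 18-month doubling time that holds indefinitely), not anything from the books:

```python
import math

# Minimal sketch of the "wait equation": total time is the years spent
# waiting plus the runtime, which shrinks as compute doubles every d years.
T0 = 7.5e6   # assumed initial runtime in years (Deep Thought's program)
d = 1.5      # assumed performance doubling period in years

def total_time(wait):
    """Years spent waiting plus the shrunken runtime afterwards."""
    return wait + T0 * 2 ** (-wait / d)

# Setting the derivative of total_time to zero gives the optimal wait:
w_opt = d * math.log2(T0 * math.log(2) / d)
run = total_time(w_opt) - w_opt
print(f"wait ~{w_opt:.0f} years, then run for ~{run:.1f} years")
# -> wait ~33 years, then run for ~2.2 years
```

Under those assumptions, waiting roughly three decades and then running the program for about two years finishes around 200,000 times sooner than starting immediately.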
I think with AI it's a bit of a different question than space travel. The flip side is that you have a limited time to add value before the activity is no longer valuable due to AI.
In general, once a machine can do something (and AI is a machine), that activity quickly stops being highly valuable.
For example, being a portrait painter around the time cameras were invented.
So if there's a project that you think AI will do for you, just keep in mind that by the time AI gets there, the effective value of that activity will have greatly diminished, and you are unlikely to get out of it what you would if you did it today.
If you wait, you may find you've been reduced from highly trained master of fine art to just another guy who says "say cheese" and pushes a button.
The value of problems is not fixed.
The space version is an interesting comparison, though, because while the value of space exploration would increase with the speed of travel (due to being able to make use of resources across greater distances), the value of any technological accomplishment decreases as it becomes easier.
What I find interesting about this are the things the specific example leaves out. Many engineers consider code reviews a real drag, but they aren't just busywork or wasted time or compliance checkboxes. Done well, they are a learning tool, a teaching opportunity, and a form of communication between engineers working in different areas of the code at different skill levels and knowledge levels. All of those things have value and add to team productivity in the long term. The "cost" in terms of hours spent doing code reviews is not wasted. But if you turn that job over to AI (even if you could), you lose almost all the benefits, adding a drag on your future productivity as knowledge gaps grow and communication declines.
On the other hand, if you're doing code review poorly, and it's just a waste of everyone's time, then you're far better off just dropping them altogether than spending money on an AI system to do poor code reviews for you.
There is a neglected aspect here, which is internal expertise. By choosing to build, even if you ultimately end up choosing another tool from a vendor, your team picks up many important skills for evaluating costs, architecture, and performance. They also might be able to better customise the tool to your application. This expertise might end up saving money in the end.
I think this analogy is best suited to training models. Even if you had access to OpenAI's datasets right now, I don't think it would make sense for you to train them, unless you are a $500B+ company. Training costs will likely go down with time, though, so at some point this might change.
Someone needs to start building the Linux of spaceships now if we're ever going to have one. In terms of AI, the difference is that AI can adapt itself to incorporate new advances. So the question is: at what point do you invest in AI, given we haven't reached that stage yet? Because by then you'll have waited too long. So, in reality, any time is the best time to invest in AI, as long as it's before that line.
Most AI investment is going to be applying off-the-shelf stuff to a new industry, so you might as well get started now to get the customers; then, when your wholesaler (OpenAI, AWS, etc.) stocks the latest product, you can wrap it up in a nice bow and sell it to your customers.
Investing in SOTA AI, though--I have no idea how you time that, but I am guessing early is better.
I sometimes think of a bigger version of this question: would it have been better if the Renaissance and the Scientific Revolution had come much later than they did? Maybe humans would have been more evolved and ready for the changes, and could have made better decisions. Progress is inevitable, but the rapidity of change might be our downfall.
I suspect that humanity changed because of the increase in knowledge. You need to 'have' power in order to learn to deal with it.
It's like saying "If you can predict the future, you'll know whether to invest or not!"
Well duh, but first find all those coefficients and then come back and we'll talk, yeah?
> C_b = (T_s - T_v) * ROC * 1/52
I have so many problems with this line
Makes me think of this xkcd: https://xkcd.com/989/