Destroy AI
ali-alkhatib.com

> I’m no longer interested in encouraging the design of more human-centered versions of these murderous technologies, or to inform the more humane administration of complex algorithmic systems that separate families, bomb schools and neighborhoods...
I'm not sure you can blame AI for that stuff. Presumably the separating of families, bombing of schools, etc. is being done by humans. Quite likely they also used pens and paper, but you can't blame those either.
When they came for the Horse-Buggy manufacturers I said nothing, because I didn't make Horse Buggies.
When they came for the secretary and typist I did nothing because I'm not a secretary. (I did rename the position to Personal Assistant, but that's another story.)
When they came for the Filing Clerk, for the Coal Miner, for the bookkeeper, for the switchboard operator, for the elevator operator, I said nothing because that's just progress.
When they came for the farmer I cheered in the name of efficiency; when small family farms were subsumed into conglomerates I bowed at the altar of capitalism.
But now they're coming for the programmer? Burn the house down! Plough the ground with salt! Woe is me, the end is nigh...
The Niemöller poem and the plight of the Luddites are used today as hideous strawmen. Anybody who feels their way of life or livelihood is threatened is going to fight. You can make a perfectly rational argument that it's an impossible fight, but it's irrational to expect humans in that situation not to fight. It's also pointlessly shitty to mock them, and to imply that they are somehow hypocrites for benefiting from, or not fighting against, similar past disruptions they had no stake in or weren't alive to experience. The disruption of livelihoods by developments in technology is not comparable to the topic of the original poem (ethnic cleansing).
Every single person who fought against the changes you enumerated was right. And it was also right for every single one of those changes to come to pass. Two correct, rational, or inevitable things can be in conflict. The world isn't that simple. It's ridiculous and immoral to attack the very concept of acting in fundamental self-interest.
What is this terrorist post doing on this site? These people always need something or other to destroy. How about building something useful instead?
We’ve already opened Pandora’s box…
Agree, but if you believe that something is evil, you are obligated to fight it, whether loss is imminent or not.
There are other boxes yet unopened, such as genetics. The question worth asking is what should we do when it's too much for us to handle.
> The question worth asking is what should we do when it's too much for us to handle.
We kept going because future technology would surely solve any problems we had created.
I wouldn't worry about destroying it just yet... Maybe it's a good time to watch and wait, because people often greatly exaggerate the abilities of the current systems, and the same people also tend to fantasize that this "intelligence" is going through exponential growth. In reality these are algorithms which recognise patterns and generate mediocrity. This summarised mediocrity is impressive enough for some to keep the hype train going, but the economics of it simply don't work out, and in the end those same economics will be the destroyer of "AI".
The thing currently being engineered is nowhere close to real reasoning or any kind of self-reflective thinking, and if the engineers working on these systems really believe they are close, that's understandable, but it's just a delusion. The algorithm we have is a set of GPU instructions which are executed in a fixed time-frame and then terminated. Our brain is fractal and works recursively 24/7 on about 20 watts of power. These systems are light years apart.
The marketing of it is outstanding, of course, very effective at making people believe in bullshit like chain of thought, or machine reasoning, or yet another static benchmark that was in the training set from the beginning. It is a product and you are the product. If real artificial intelligence arrives, it will be used to physically dominate the world first and only then released to the public.
People are babies in this universe, and they don't have even the slightest knowledge of how things work. So let them play with their toy creations; eventually it will be gone.
Unfortunately, there is no path to victory for humanity. You can sabotage OpenAI, but that only means Kim or Putin gets there first.