Import Alignment: A Library-Based Approach to AI Alignment
danielmiessler.com — Daniel Miessler suggests a way to help with the alignment problem.
...by sending it a Voyager-style "please don't kill us" message?
This is full-on fantasy wish fulfillment for this author. It's borderline infuriating to see something as complicated as AI adoption boiled down to a wishlist for Santa. If AI comes to the conclusion that humanity has a net negative impact on the planet (which seems reasonable), then maybe we don't put it in control of democracy? None of the futurist hellscapes these authors present is ever actually argued for; they're simply presupposed.
It reminds me of those never-ending Flying Car articles Popular Mechanics used to run. Philosophy and nerd porn that moves magazines, but doesn't translate into any meaningful speculation about real life.
The real robots.txt :-)