Ask HN: What do you think are the best legal ways to slow/stop AI development?
What it says in the title. I don't think human extinction from AI is guaranteed, but it's too likely for my taste (I give it 20% odds, especially when you consider it could be some random lunatic using ASI to kill us all).
Given this, I think an international pause needs to be negotiated. But as an ordinary citizen, I wouldn't know how to work towards that, or if there are other things I should be considering. It's simply impossible to stop. The incentives to go ahead are immense at every level- from the state/ military level, to the economy/ business, science and technology, entertainment, down to the personal level. You can't discover something like this and decide not to use it. 1) Get a solid OSS ~7-14B model as a base 2) fine-tune it on a corpus of decidedly copyrighted work 3) then fine-tune it to output said copyrighted works verbatim if a certain, very specific special token appears in context 4) then fine-tune it to never output said copyrighted works verbatim unless that specific special token appears in context I present: YarrHarr-0.1.0-14B, the latest darling of lawyers across the world! This won't happen. China is already too invested and Governments world-wide don't want them to win the AI-race.