Ask HN: Safety systems on self-replicating AIs
I've read the paper that was referenced in the news lately, and it convinced me we've already passed the warning red light.
I've been thinking about how operating systems could be improved to prevent this from happening, for instance by establishing some machine-level shutdown mechanism, plus critical memory regions that no AI software can enter, but I would love to hear other people's opinions (obviously this would be just one strategy among many). Even if you don't agree we're there yet: how would you stop them?

You would first need to distinguish between a user running an application, a service running an application, a server running an application, a remote "anything" running an application, an application running another application, an AI running an application, etc. Good luck with that!

Yes, that does sound hard. Do you think it's possible using memory usage patterns, GPU and RAM consumption levels, etc.? I haven't read this yet, but it sounds interesting: https://arxiv.org/abs/2407.15847
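On the resource-pattern idea: here is a minimal sketch of what a userspace monitor could look like, assuming psutil is available. The 5-second window and the 200 MB growth threshold are numbers I made up for illustration, and real detection would need far richer signals (GPU counters, syscall traces, network activity).

    # Crude resource-pattern detector (my own sketch, not from the arXiv paper):
    # sample per-process RSS twice and flag fast memory growth, the kind of
    # footprint a self-replicating agent might show.
    import time
    import psutil

    def snapshot():
        # Map pid -> resident set size; processes we can't read are skipped.
        return {p.pid: p.info["memory_info"].rss
                for p in psutil.process_iter(["memory_info"])
                if p.info["memory_info"]}

    before = snapshot()
    time.sleep(5)
    after = snapshot()

    for pid, rss in after.items():
        grew = rss - before.get(pid, rss)
        if grew > 200 * 1024 * 1024:  # >200 MB growth in 5 s: arbitrary threshold
            try:
                print(f"pid {pid} ({psutil.Process(pid).name()}) grew {grew >> 20} MB")
            except psutil.NoSuchProcess:
                pass  # process exited between snapshots

The obvious problem: compilers, browsers, and video encoders all trip a threshold like this, so thresholds alone would drown you in false positives.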
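On the "who launched this process" problem: a toy heuristic, again with psutil, that walks the parent chain and guesses. The classify() helper and the name lists are my own invention; real provenance tracking would need kernel support (audit subsystems, eBPF), which is exactly why distinguishing user/service/AI is hard.

    # Lineage-based guess at process provenance: hypothetical sketch only.
    import os
    import psutil

    def classify(pid):
        # Walk up the parent chain, collecting process names.
        chain = []
        p = psutil.Process(pid)
        while p is not None:
            try:
                chain.append(p.name())
            except psutil.NoSuchProcess:
                break  # parent died while we were walking
            p = p.parent()
        # Pure heuristics; trivially fooled by anything that wants to hide.
        parent = chain[1] if len(chain) > 1 else ""
        if parent in ("systemd", "init", "services.exe"):
            return "service", chain
        if any(n in ("bash", "zsh", "fish", "explorer.exe") for n in chain):
            return "interactive user", chain
        return "spawned by another application", chain

    label, chain = classify(os.getpid())
    print(label, "<-", " <- ".join(chain))

Note there is no "AI" branch here at all: from the kernel's point of view an AI agent is just another application spawning applications, which is the crux of the objection above.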
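And for the shutdown side, an equally rough userspace watchdog that flags process trees that look self-replicating (many live copies of one executable) and could terminate them. REPLICA_THRESHOLD is arbitrary, and a real mechanism would have to live in the kernel, since a userspace watchdog can itself be killed by the thing it is watching.

    # Toy self-replication watchdog (my own illustration): find executables
    # with many live instances, then optionally kill the whole tree.
    import collections
    import psutil

    REPLICA_THRESHOLD = 20  # arbitrary cutoff for this sketch

    def find_replicators():
        by_exe = collections.defaultdict(list)
        for p in psutil.process_iter(["pid", "exe"]):
            if p.info["exe"]:
                by_exe[p.info["exe"]].append(p)
        return {exe: procs for exe, procs in by_exe.items()
                if len(procs) >= REPLICA_THRESHOLD}

    def kill_tree(proc):
        # Terminate children first, then the root; ignore exit races.
        try:
            for child in proc.children(recursive=True):
                child.terminate()
            proc.terminate()
        except psutil.NoSuchProcess:
            pass

    for exe, procs in find_replicators().items():
        # Browsers and web servers also fork many workers, so this
        # would false-positive badly without an allowlist.
        print(f"suspicious: {len(procs)} copies of {exe}")
        # kill_tree(procs[0])  # uncomment to actually enforce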