Aligning ML systems with human intent
jsteinhardt.stat.berkeley.edu

This is such an important topic in the world of AI and ML. Ensuring that our systems align with human intent is crucial for ethical and effective development. Thanks for sharing this resource from Berkeley – it's a great way to keep the conversation going and inspire more research in this area!
The problem is that human intent will likely not be enough. Intent-based goals fail even among humans: we set goals and objectives for others, and they are often accomplished by means we did not intend.
This is acceptable among humans, because none of us is all-powerful, and we rely on feedback loops and decentralized decision making to make corrections.
However, that will no longer hold if an AI system is given too much power over decision making. Under those conditions, this concept of alignment is not achievable.
My full thoughts on this topic - https://dakara.substack.com/p/ai-singularity-the-hubris-trap