One morning, you (and everyone else on the road) are using self-driving cars to get to work. In a split second, the system calculates that a 15-car collision is imminent. Assuming all self-driving cars are connected, it concludes that in order to minimize casualties, your car must crash and you will die.
Such a scenario is not so far-fetched: California recently approved rules allowing self-driving cars on public roads. Some may say the solution is to keep self-driving cars from being interconnected, but the underlying problem exists even for a single car. For example: your car realizes it will collide with either five children or two men.
Are you more important than the person sitting in the other car? Does a computer have the right to measure what your life is worth by analyzing your social networks? Are the two men worth more to society than the five children? Should it matter?
Most of these questions are, in one form or another, existing problems in philosophy, with the difference that an artificially intelligent computer is making the decision instead of a person. When we discuss philosophy, each person can decide based on what they value. Intelligent computer systems, however, will likely decide the answers to such questions for the masses, often making decisions that conflict with one's personal values.
A computer is not inherently intelligent; it is programmed to value one thing over another. What should we program a computer to value? More importantly, who will ultimately decide?
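To make that concrete, here is a deliberately oversimplified sketch of how such a value judgment ends up hard-coded. Everything in it is hypothetical: the `Outcome` type, the `occupant_weight` parameter, and the numbers themselves are illustrative assumptions, not any real vehicle's logic.

```python
# Hypothetical sketch: how a collision planner might "value one thing
# over another". All names and weights here are assumptions made up
# for illustration, not any real system's behavior.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    expected_casualties: float  # estimated lives lost
    occupant_survives: bool     # does the car's own passenger live?


def cost(outcome: Outcome, occupant_weight: float) -> float:
    """Score an outcome. occupant_weight encodes a value judgment:
    how much extra weight the car places on its own passenger's life.
    Someone, somewhere, has to choose this number."""
    penalty = 0.0 if outcome.occupant_survives else occupant_weight
    return outcome.expected_casualties + penalty


def choose(outcomes: list[Outcome], occupant_weight: float) -> Outcome:
    # The "decision" is simply picking the minimum-cost outcome.
    return min(outcomes, key=lambda o: cost(o, occupant_weight))


if __name__ == "__main__":
    options = [
        Outcome("swerve into barrier", expected_casualties=1, occupant_survives=False),
        Outcome("continue straight", expected_casualties=5, occupant_survives=True),
    ]
    # With occupant_weight = 0 the car sacrifices its passenger;
    # with occupant_weight = 10 it protects them at others' expense.
    for w in (0.0, 10.0):
        print(w, "->", choose(options, w).description)
```

Whatever value `occupant_weight` takes, the crucial point is that it is chosen long before any crash, by whoever writes or regulates the software, and it silently answers every question posed above.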