Sunday, July 19, 2015

Will We Build Ethical Machines In The Future?

Now that we have machines with some autonomy, like the driverless cars coming to our roads soon enough, researchers are looking at how they will behave and whether they can behave ethically.  They even ask questions such as, "What if a vehicle's efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?" (Deng, 2015)

These echo the classic streetcar thought experiment (and others of its ilk): if you were driving a streetcar rolling toward five people strapped to the track, and you could either plow into them, killing them, or swerve and miss them, killing the passengers on your streetcar, what would you do?  These scenarios set up a choice between killing many people and killing a handful, with no right answer.  

The idea is important, as it gets people thinking about what is really right or wrong when making a choice - even if the answer is not black and white.  And having objects like self-driving cars programmed to make ethical decisions is the next logical step.  These ethical decisions are made differently than yours or mine: an algorithm is written for the machine so that it will choose the best available option.  Part of this is making the machines capable of self-learning, so that they can improve how they interact.  Hopefully, philosophers will play an important role in ensuring that the machines coming into public use have a solid set of ethics to draw on as they go forward.  
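To make that a little more concrete, here is a minimal sketch (in Python) of what such a decision algorithm might look like: each possible maneuver gets an estimated harm score for the people it could affect, and the car picks the option with the lowest weighted harm.  All of the names, numbers, and weights below are hypothetical illustrations of the general idea, not anything taken from the article.

from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical driving action with an estimated harm score per affected group."""
    name: str
    expected_harm: dict  # group name -> estimated harm (0.0 = none, 1.0 = severe)

def choose_maneuver(options, weights):
    """Pick the option whose weighted expected harm is lowest.

    The weights encode how much the designer values each group's safety;
    changing these numbers changes the machine's 'ethics'.
    """
    def total_harm(option):
        return sum(weights.get(group, 1.0) * harm
                   for group, harm in option.expected_harm.items())
    return min(options, key=total_harm)

# A streetcar-style dilemma like the one above, with made-up numbers.
options = [
    Maneuver("brake hard",  {"passengers": 0.2, "following traffic": 0.5}),
    Maneuver("swerve left", {"child": 0.0, "bystander": 0.7}),
    Maneuver("stay course", {"child": 0.9}),
]
weights = {"passengers": 1.0, "child": 1.0, "bystander": 1.0, "following traffic": 1.0}
print(choose_maneuver(options, weights).name)

The interesting (and unsettling) part is that the ethics live entirely in the harm estimates and the weights: a programmer or a self-learning system that adjusts those numbers is, in effect, deciding whose safety counts for how much.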




http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881
