Teaching Robots To Behave Ethically

While other researchers are busy teaching robots how to lie, professor Susan Anderson and her husband Michael have taught a robot how to behave ethically. I know which team is getting my research dollar.


Susan Anderson is a philosopher. Her husband Michael is a computer scientist. By their powers combined, they've advanced the young field of machine ethics considerably, all in the name of making robots treat us like human beings.

"There are machines out there that are already doing things that have ethical import, such as automatic cash withdrawal machines, and many others in the development stages, such as cars that can drive themselves and eldercare robots," says Susan, professor emerita of philosophy in the College of Liberal Arts and Sciences, who taught at UConn's Stamford campus. "Don't we want to make sure they behave ethically?"

Machine ethics combines ethical theory with artificial intelligence in order to help give electronic lifeforms a sense of ethics, and while the jury is still out as to whose ethics should be instilled in robots, I'm glad someone is looking into it.

The couple based their work on the prima facie duty approach to ethics, introduced by Scottish philosopher W. D. Ross in 1930. This approach has a person weighing their actions against a set of obligations, such as doing no harm, promoting health and safety, and being courteous. It's a complicated method for human beings to use, but it's perfect for machines.

Indeed, it's perfect for robots, specifically those assigned a set group of tasks, like making sure a patient takes their medication. The set of obligations for that specific situation is programmed in, and the robot knows how to respond correctly.

"Machines would effectively learn the ethically relevant features, prima facie duties, and ultimately the decision principles that should govern their behavior in those domains," says Susan.
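The duty-weighing idea can be sketched in code. Here's a minimal, hypothetical illustration in the spirit of the medication-reminder scenario; the duty names, weights, and scores are my own assumptions, not the Andersons' actual system:

```python
# A toy sketch of the prima facie duty approach: score each candidate
# action against a set of weighted obligations and pick the best.
# All duties, weights, and scores below are illustrative assumptions.

# Each duty gets a weight reflecting its relative importance.
DUTIES = {
    "do_no_harm": 3,        # avoiding harm matters most
    "promote_good": 2,      # promoting the patient's health
    "respect_autonomy": 1,  # respecting the patient's choices
}

def weigh(action_scores):
    """Weighted sum of duty satisfactions for one action.

    Scores per duty: +1 (satisfies), 0 (neutral), -1 (violates).
    """
    return sum(DUTIES[duty] * score for duty, score in action_scores.items())

def choose(actions):
    """Pick the action whose weighted duty score is highest."""
    return max(actions, key=lambda name: weigh(actions[name]))

# Scenario: the patient has refused a dose of medication.
actions = {
    # Waiting and reminding later satisfies every duty, if only mildly.
    "remind_again_later": {"do_no_harm": 1, "promote_good": 1, "respect_autonomy": 1},
    # Notifying the doctor promotes good but overrides autonomy.
    "notify_doctor": {"do_no_harm": 1, "promote_good": 1, "respect_autonomy": -1},
    # Doing nothing respects autonomy but risks harm.
    "do_nothing": {"do_no_harm": -1, "promote_good": -1, "respect_autonomy": 1},
}

print(choose(actions))  # the action with the best weighted balance of duties
```

The interesting part, and what the Andersons actually work on, is learning those weights and decision principles from ethicists' judgments rather than hard-coding them as I've done here.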


The trickiest part of teaching robots ethics is that it's difficult for many humans to grasp the concept themselves. Perhaps one day humans will be taking ethical cues from machines.

The ethical robot [Physorg.com]


UI 2.0

It's all smoke and mirrors. They aren't creating a robot that acts ethically.

Ethics varies from culture to culture and from society to society. In one country, giving a gift to sway a decision (a bribe in the US) is perfectly acceptable, and rejecting the gift is frowned upon and hurts the business relationship. Ethics is not something you can truly teach or create. The whole reason we study ethics is that there is no right or wrong, and that's what makes it so interesting.

Getting a step closer to a robot with ethics requires an AI with free will. Whether the robot chooses to act in accordance with its society and culture will determine whether it is ethical or not.