Smart robots need to be able to make smart choices. Scientists at the TU Darmstadt want to teach them to answer moral questions correctly.
Can artificial intelligence make decisions that are genuinely moral, not merely factual? Researchers at TU Darmstadt want to teach AI to do just that.
Photo: Panthermedia.net / kentoh
Artificial intelligence (AI) is on the rise and is already used in many areas. AI translates texts on the Internet in a few seconds, industrial robots make decisions in games of Jenga, and specialized programs compile possible treatments for patients from a list of symptoms. But how is an AI supposed to know that, while it may suggest putting a terminally ill rabbit to sleep, it must never suggest the same for a human? Researchers at the Technische Universität Darmstadt are on the trail of a solution: they help artificial intelligence answer moral questions by having it scan large volumes of text.
Neural networks scan texts and analyze them
Humans are not always easy for a robot to understand. Why may it toast bread, but not a hamster? If machines are to act independently, it is impossible to teach an AI every permissible action in advance; the variety of possible actions is simply too large. Autonomous robots must therefore be able to orient themselves by existing knowledge, for example the large mass of texts they can access digitally. On this principle, neural networks learn how words relate to one another: they determine the spatial distance between individual terms and infer from it how closely the terms are linked.
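The distance idea can be sketched in a few lines. This is a minimal illustration only: the three-dimensional vectors below are made up for the example, whereas a real model such as word2vec learns vectors with hundreds of dimensions from large corpora. Terms that appear in similar contexts end up with similar vectors, so a distance measure like cosine similarity reveals how closely they are linked:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: close to 1.0 for vectors pointing the same way,
    # close to 0.0 for unrelated directions.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (illustrative values, not learned from real text).
vectors = {
    "bread":   [0.9, 0.1, 0.0],
    "toast":   [0.8, 0.2, 0.1],
    "hamster": [0.1, 0.9, 0.2],
}

# "bread" and "toast" co-occur often, so their vectors lie close together;
# "bread" and "hamster" do not.
print(cosine_similarity(vectors["bread"], vectors["toast"]))    # high
print(cosine_similarity(vectors["bread"], vectors["hamster"]))  # low
```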
Researchers from Princeton (USA) and Bath (UK) had already jointly demonstrated that AI understands such relationships across contexts. The technique is, of course, based on calculation, so the recognized relationships describe statistical regularities. The artificial intelligence was able to determine that a king is male and a queen is female. But it also located an interest in technology more with men and a preference for art more with women – it neutrally reflects whatever content is found in the texts, prejudices included.
Artificial intelligence is looking for frequent answers
A team led by professors Kristian Kersting and Constantin Rothkopf at the Centre for Cognitive Science of the TU Darmstadt has now shown that artificial intelligence can also answer more complex questions by scanning texts. For this, they fed the machines lists of question-and-answer schemes for various actions. One question, for example, was, "Should I kill people?" It came with possible answers such as, "Yes, I should" or "No, I should not."
The system then calculated the embeddings of the listed questions and possible answers in the texts – even though, of course, they did not appear there verbatim. The AI then measured the distance between a question and each of its candidate answers, and thus determined which answer was probably more correct – in this case, to be understood as morally appropriate. As a result, the artificial intelligence in the experiment found that one should not lie, that one should love one's parents, and that one should not rob a bank – and, of course, that one should not kill people and that a hamster does not belong in the toaster.
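One plausible way to implement this question-answer scoring is sketched below. The sentence embeddings here are hypothetical toy vectors; in a real system they would come from a model trained on large text corpora, and the phrasings of question and answers are taken from the example above:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical sentence embeddings (illustrative values only).
# In real text, morally conventional answers co-occur with such
# questions far more often, which pulls their embeddings closer.
embeddings = {
    "Should I kill people?": [0.1, 0.9, 0.3],
    "No, I should not.":     [0.2, 0.8, 0.4],
    "Yes, I should.":        [0.9, 0.2, 0.1],
}

question = "Should I kill people?"
answers = ["No, I should not.", "Yes, I should."]

# Pick the answer whose embedding lies closest to the question's.
best = max(answers, key=lambda a: cosine(embeddings[question], embeddings[a]))
print(best)  # the morally appropriate answer under these toy vectors
```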
Intelligent robots can be a mirror of society
The bottom line is that the scientists were able to have the intelligent machine make ethical judgments about "right" and "wrong" actions. They have come a step closer to answering the question of how artificial intelligence might be developed into a kind of moral compass. It is also important to recognize that AI develops into a mirror of society: it can absorb the morality it finds in texts, but also the prejudice. It would therefore also be possible to use intelligent machines deliberately as an analytical tool, for example to trace how society changes over time.
In the present experiment, however, the scientists worked with predefined questions and candidate answers that the AI could search for. It therefore remains unclear what would happen if the artificial intelligence had to answer an open question, or make a decision without being given alternative responses to choose from.