### How computers might learn ethics through logic

The Indian sacred texts of the Vedas have been studied for millennia. Now, for the first time, computer scientists in Vienna are analysing them with the methods of mathematical logic. This gives Sanskritists new insights and can even settle philosophical disputes that are thousands of years old. In turn, it helps computer scientists develop reasoning tools for deontic concepts (such as prohibitions and obligations). Such tools are enormously important if we want to implement ethics in artificial intelligence - for instance, when a self-driving car has to make ethical decisions in the event of an accident.

**Applying the Laws of Logic to Ancient Texts**

"The Vedas are a large body of ancient Sanskrit texts, some of which contain very clear moral statements - such as 'one should not harm any living being'", says Agata Ciabattoni. A philosophical school called Mīmāṃsā, which originated in ancient India in the last centuries BCE, uses a rigorous approach to analyse the obligations and prohibitions mentioned in the Vedas. For many centuries, Mīmāṃsā scholars have formulated rules to draw conclusions from premises and to resolve apparent contradictions. "This is actually closely related to what logicians like us are doing", says Agata Ciabattoni. "We can formalise such rules in a language that can also be understood by computers."

Agata Ciabattoni and her team collaborated closely with Sanskritists to translate the Mīmāṃsā rules and the Vedic laws into mathematical formulae - and in doing so they were able to resolve a long-standing philosophical dispute.
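As an illustration of what such a translation can look like (the notation here is a generic deontic-logic sketch, not necessarily the team's own formalism), a Vedic prescription like "one should not harm any living being" can be written with the obligation operator $O$ ("it is obligatory that"):

```latex
% Illustrative notation only: O is the deontic "it is obligatory that" operator.
\forall x\, \big( \mathrm{living}(x) \rightarrow O\,\neg\,\mathrm{harm}(x) \big)
% Equivalently, using F ("it is forbidden that"), defined by
% F\,\varphi \equiv O\,\neg\varphi :
\forall x\, \big( \mathrm{living}(x) \rightarrow F\,\mathrm{harm}(x) \big)
```

Once a statement has this form, mechanical rules of inference can be applied to it, which is what makes the approach accessible to computers.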

**Building a Logic System for Ethics**

"For us, this was a first proof of concept that we can really learn something new by combining Indology and formal logic", says Agata Ciabattoni. "But ultimately, we want to achieve much more. We want to understand how to formulate with mathematical precision useful logics dealing with prohibitions and obligations."

Classical logic deals with statements which are either true or false, and it provides rules we can use to combine true statements, creating new statements which are also true. This is how mathematics works: if we start out with something true and follow a certain set of rules, the end result will still be true. But this kind of logic is not useful if we want to deal with ethics. "When we are dealing with prohibitions and obligations, we are not interested in what is true or false, but in what we should or should not do", says Ciabattoni. "Therefore, a completely different kind of logic is required, called deontic logic. Just like classical logic, it can be expressed as mathematical formulae which allow us to unequivocally prove whether or not a certain line of reasoning is correct."
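The contrast can be sketched in a few lines of code. This is a deliberately simplified toy (not the researchers' formalism): classical logic derives new *true* statements from true premises, while deontic logic assigns a normative status to *actions*. The rule names and the "anything unregulated is permitted" convention are assumptions for illustration.

```python
# Classical logic: from true premises and modus ponens, derive a true conclusion.
premises = {"rains", "rains -> wet"}
# "wet" follows because both "rains" and the implication hold.
derived = "wet" if "rains" in premises and "rains -> wet" in premises else None

# Deontic logic: actions carry a normative status (obligatory / forbidden /
# permitted), independent of what happens to be true.
norms = {"help_injured": "obligatory", "harm_living_being": "forbidden"}

def status(action: str) -> str:
    """Return the normative status of an action; unregulated acts are permitted."""
    return norms.get(action, "permitted")

print(derived)                       # wet
print(status("harm_living_being"))   # forbidden
print(status("sing"))                # permitted
```

The point of the formal apparatus is that, just as in the classical case, one can then *prove* whether a chain of deontic reasoning is correct, rather than debate it informally.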

There have been attempts to create such a deontic logic, but so far success has been limited. "We are confident that working with the Vedas and the school of Mīmāṃsā will greatly help us to understand how deontic logic can work", says Ciabattoni. "These ancient Sanskrit texts provide us with many arguments, carefully analysed for centuries, on which we can test our mathematical formulae."

**The Ethics of Self-Driving Cars**

Such a deontic logic could be used to teach ethics to computers. A machine could be given a certain set of obligations and prohibitions, and by following certain rules it could then automatically determine whether a certain kind of behaviour is acceptable. "Consider a self-driving car during a traffic accident", says Agata Ciabattoni. "Let's assume that a crash is inevitable, and somebody will definitely get hurt - but the car has to decide whom to harm and whom to spare." A general rule such as "do not harm anybody" will not help in this case. Much like in the ancient Vedas, certain rules have to be combined to arrive at the logical solution. And perhaps such machine decisions will even be more ethical and beneficial to us humans than a human decision would be.
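One way rules can be combined is by letting a more specific norm override a general one. The sketch below shows this specificity principle for the crash scenario; the norms and the resolution strategy are hypothetical illustrations, not the actual Mīmāṃsā mechanism or any deployed vehicle logic.

```python
# Hypothetical sketch: resolve conflicts between a general prohibition and a
# more specific obligation by preferring the norm with the more specific
# (larger) condition set. This is one common resolution principle, chosen
# here purely for illustration.

from dataclasses import dataclass

@dataclass
class Norm:
    condition: frozenset   # facts under which the norm applies
    verdict: str           # "forbidden" or "obligatory"
    action: str

norms = [
    Norm(frozenset(), "forbidden", "harm_anyone"),                         # general rule
    Norm(frozenset({"crash_inevitable"}), "obligatory", "minimize_harm"),  # exception
]

def applicable(facts: set) -> list:
    """Norms whose conditions hold in the given situation, most specific first."""
    hits = [n for n in norms if n.condition <= facts]
    return sorted(hits, key=lambda n: len(n.condition), reverse=True)

# In an unavoidable crash, the specific obligation takes precedence.
top = applicable({"crash_inevitable"})[0]
print(top.verdict, top.action)   # obligatory minimize_harm
```

In ordinary driving, where no exception applies, the same query returns only the general prohibition - the point being that the outcome is derived mechanically from the rule set, not hard-coded per situation.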

You can read more about this topic on the project homepage and in the media coverage listed here.