OLGA USKOVA BLOG: Death in the Denominator

“To the Moika Embankment, please.”

I am sitting in a taxi in St. Petersburg, looking at my random taxi driver. He is a man of about sixty who stares constantly at the navigator and only occasionally glances at the road. It's night. It's winter. It's a busy highway from the airport. What is in his head? How safe am I? I am holding the door handle... so that if anything happens, I can jump out of the car...

There is a lot of talk and discussion these days about Artificial Intelligence (AI) for autonomous vehicles. Many people wonder: how can we entrust our lives to a system built on deep neural networks (DNNs), at a time when we don't fully understand how DNNs function and what is happening inside them? And yet we constantly trust our lives to unknown public-transport drivers, with no idea what is going on in their heads or how adequate they are. Let's try to figure this out.

Quite a long time ago, people came up with five commandments for solving complex scientific problems:

1) Perfect Accuracy – We always strive to get the best answer.

2) Comprehensive Completeness – We want to know everything about the task, all the possible data.

3) Predictable Repeatability – We want to get the same result every time we conduct an experiment under the same conditions.

4) Exceptional Speed – We expect to get the results in minimal time.

5) Transparency – We want to know how we got the result.

But for today's complex systems, such as AI for autonomous vehicles, this approach turns out to be poorly applicable. We want to create an artificial brain that resembles our own human brain, yet humans themselves are not optimal, repeatable, or comprehensive. People live and work by the Best-Effort Principle: the most anyone demands from other people is their best effort. This is also the best that any methodology can do for tasks of this class. It makes no sense to search for a perfect solution in an infinite space of data.

In biological systems, neural reactions are inherently unpredictable in the logical sense, because they depend on a huge variety of complex electrochemical processes and the release of signaling substances. We can't be 100% sure that the same stimulus will always produce exactly the same impulse. And we don't really manage this anyway. Outbursts of anger, passion, despair, or unexplained joy... try to optimize all of that. It is impossible. Someone up there in the sky will laugh into his beard.

Since most of our activities are based on this Best-Effort Principle, and there are no predetermined correct answers, human organisms are naturally resistant to minor internal errors and failures. A perfectly balanced organism would not survive for long. So we at Cognitive Technologies propose a new approach:

THE ARTIFICIAL INTELLIGENCE CT-COMPROMISE


The key idea of this approach is 'Sufficiency': there is no need to pursue an illusory ideal once you have achieved sufficient functionality. We are introducing a new coordinate system:

1) Instead of ‘Perfect Accuracy’ – Permissible Sufficiency. In an infinite space of solutions, we should look not for the optimal solution but for a permissible one, exactly as happens in real life (see the sketch after this list).

2) Instead of ‘Comprehensive Completeness’ – Available Data Completeness.

3) Instead of ‘Predictable Repeatability’ – Acceptable Variability, with the possibility of small deviations.

4) Instead of ‘Exceptional Speed’ for the project as a whole – maximum speed within each particular task.

5) Instead of ‘Transparency of the Process’ – trust in the system's work and results, just as with human drivers.
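To make Permissible Sufficiency concrete, here is a minimal sketch in Python. Every name and threshold in it (`score`, `permissible_option`, `SUFFICIENT_SCORE`) is a hypothetical illustration, not part of any real Cognitive Technologies system: instead of ranking every candidate to find the global optimum, the search simply stops at the first option that clears an acceptability threshold.

```python
# A hedged illustration of 'Permissible Sufficiency': stop at the first
# acceptable option instead of searching an unbounded space for the optimum.
# All names and the threshold are hypothetical, not a real planner API.

SUFFICIENT_SCORE = 0.8  # assumed, domain-specific acceptability threshold

def score(trajectory):
    """Stand-in quality metric (comfort, progress, rule compliance...)."""
    return trajectory["quality"]

def permissible_option(candidates):
    """Return the first 'good enough' candidate, not the best one."""
    for trajectory in candidates:
        if score(trajectory) >= SUFFICIENT_SCORE:
            return trajectory  # sufficiency reached: stop searching
    return None  # best effort: no permissible option in this budget

plan = permissible_option([
    {"quality": 0.55},  # not good enough, skipped
    {"quality": 0.83},  # permissible: accepted immediately
    {"quality": 0.99},  # never even examined
])
```

The point of the early return is exactly the trade in item 1: we give up the guarantee of optimality in exchange for a bounded search that always finishes, which is how a human driver decides too.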

However, the most important thing for this whole system is one common denominator: the NON-ALLOWABILITY OF DEATH. This thought is always in our subconscious while we sit in someone else's car. The instinct of self-preservation is inherent in all of us by nature. But in the case of Artificial Intelligence the situation is more complicated, because the AI must not only protect itself but, first and foremost, save its passenger – save a human life.
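Continuing the sketch above, and again with purely hypothetical names and numbers, one natural reading of this common denominator is that it is not one more weighted term in the quality score but a hard filter applied before any sufficiency test, so that no amount of comfort or speed can outvote it:

```python
# Hypothetical sketch: the common denominator as a hard constraint.
# A candidate that risks a life is discarded outright; only the survivors
# compete on the soft, 'good enough' criteria.

SAFE_MARGIN_M = 2.0     # assumed minimum clearance from any person, meters
SUFFICIENT_SCORE = 0.8  # the same acceptability threshold as above

def is_fatality_free(trajectory):
    """Hard constraint: stand-in for a real collision/clearance check."""
    return trajectory["min_distance_to_person_m"] > SAFE_MARGIN_M

def choose(candidates):
    safe = [t for t in candidates if is_fatality_free(t)]  # filter first
    for t in safe:
        if t["quality"] >= SUFFICIENT_SCORE:
            return t  # good enough among the safe options
    # Best effort: if nothing is 'good enough', take the best safe option.
    return max(safe, key=lambda t: t["quality"], default=None)

plan = choose([
    {"min_distance_to_person_m": 0.4, "quality": 0.99},  # unsafe: rejected
    {"min_distance_to_person_m": 3.1, "quality": 0.84},  # safe, sufficient
])
```

Every soft criterion on the list above may be relaxed, but this one sits in the denominator: violate it and the whole value of the plan goes to zero.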

And here the territory of morality begins. Whom should it save? Only the passenger, or the pedestrians in front of it as well? And which option should it choose when these conflict? That is the topic of my next article.

I would like to thank my friend and colleague Monica Anderson for the inspiration; the idea for this system came to me while listening to some of her lectures.

 
