Artificial intelligence is increasingly present in our daily lives: speech recognition, automatic translation, personal assistants, chatbots, recommendation algorithms. These technological advances are also the subject of criticism and ethical questions. What happens to our data? Will algorithms govern us? What becomes of the concept of freedom?
There is work to be done to teach people about the positive and negative sides of AI, and even more to make them aware of the alternatives. Is this a task for our governments or for commercial entities?
Why ethics in artificial intelligence should be addressed
Advances in artificial intelligence have made it possible to automate tasks that were until now performed by humans: driving cars, enabling communication between people speaking different languages, telling you what to do next. Artificial intelligence is invading the sphere of human responsibility, pushing us to "outsource" tasks that could, in the past, only be performed by humans. Human decisions carried an element of randomness (let's call it "unconsciousness"), and these flaws actually simplified some ethical issues. Let's take one straightforward example: driving a car.
Fatalities unfortunately happen every day, and most of them are not intentional. Our brains and bodies are not made to resolve critical situations in a matter of milliseconds; that is beyond our physical and physiological capabilities. With self-driving cars, fatalities may result from a conscious choice: in an emergency, should the car kill its four passengers rather than the pregnant pedestrian 50 meters ahead?
Artificial intelligence's promise is to find the best option in a matter of milliseconds, even microseconds (look, for instance, at targeted advertising or stock trading, where computers make millions of decisions in a few milliseconds). AI-driven decisions are conscious in the sense that they reflect the best available option at any moment. Because we humans are unable to make such conscious choices at all times, the ethical question described above has never applied to us.
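To make this concrete, here is a minimal sketch of what such a millisecond-scale "conscious choice" looks like in principle: the machine enumerates candidate actions, scores each one with an explicit cost model, and picks the minimum. Everything here is hypothetical, including the action names and the harm scores; a real system would derive these from perception and prediction models.

```python
import time

def expected_cost(action, situation):
    # Hypothetical cost model: look up a predicted-harm score per action.
    # In a real vehicle this would come from perception/prediction systems.
    return situation["harm_estimates"][action]

def choose_action(situation):
    # Evaluate every candidate action and return the one with the lowest
    # expected cost -- an explicit, deliberate choice, unlike a human reflex.
    return min(situation["harm_estimates"],
               key=lambda a: expected_cost(a, situation))

# Toy emergency scenario with made-up harm scores (not real data).
situation = {
    "harm_estimates": {"brake_hard": 0.3, "swerve_left": 0.9, "stay_course": 0.7}
}

start = time.perf_counter()
decision = choose_action(situation)
elapsed = time.perf_counter() - start

print(decision)  # → brake_hard (the lowest-cost option in this toy scenario)
print(elapsed)   # typically far below a millisecond on modern hardware
```

The ethical difficulty the article points at lives entirely inside the cost model: once harm is made into a number, the machine will optimize it, and someone had to decide what counts as harm.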
Large corporations team up to educate people on AI
If AI is about to eat up every part of our lives, how should we react?
We just learned that Google, Amazon, Facebook, Microsoft and IBM have teamed up to create a not-for-profit organization, the "Partnership on Artificial Intelligence to Benefit People and Society" (PAIBPS). The PAIBPS will conduct research, recommend good practices, and publish its results under a Creative Commons license. The PAIBPS intends to educate and listen to people as well as governing bodies; however, it does NOT intend to carry out lobbying activities.
This very much resembles what happened in the pharmaceutical industry, where for-profit corporations formed associations to advise governing bodies, conducted research, gradually took control of scientific journals, and ultimately controlled what should and shouldn't be published.
For-profit companies are biased by nature. They strive to reach financial KPIs, even if doing so harms people. There is no need for the obvious examples from oil or natural-resource extraction; let's instead take a more subtle one, acknowledged by the firm itself: Netflix. As you may know, artificial intelligence is at the heart of Netflix's success: 80% of what subscribers watch is recommended by algorithms. In other words, the next movie you watch is very likely to have been suggested to you by artificial intelligence. Here is what Neil Hunt, Chief Product Officer of Netflix, says about this:
“Netflix can’t make the difference between addiction and recommendation”
This seemingly essential ethical question (are we still free, or enslaved by an algorithm?) is unlikely to bother Hunt very much. After all, the only metric that counts for him is retention. That's what he is rewarded for.
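The point can be sketched in a few lines of code. This is a toy model, not Netflix's actual system: the catalogue, the scores, and the "wellbeing" signal are all invented for illustration. It shows how the choice of objective function, not the algorithm itself, decides whether a recommender feeds addiction or serves the viewer.

```python
# Hypothetical catalogue: each title has a predicted watch-time score
# (a proxy for retention) and a made-up "wellbeing" score.
catalogue = {
    "binge_series": {"watch_time": 0.95, "wellbeing": 0.2},
    "documentary":  {"watch_time": 0.60, "wellbeing": 0.8},
    "short_film":   {"watch_time": 0.40, "wellbeing": 0.9},
}

def recommend(catalogue, weight_wellbeing=0.0):
    # Score each title as a blend of retention and wellbeing.
    # With weight_wellbeing=0, the metric is pure retention --
    # the recommender optimizes watch time and nothing else.
    def score(title):
        s = catalogue[title]
        return ((1 - weight_wellbeing) * s["watch_time"]
                + weight_wellbeing * s["wellbeing"])
    return max(catalogue, key=score)

print(recommend(catalogue))                        # → binge_series (retention only)
print(recommend(catalogue, weight_wellbeing=0.7))  # → short_film (blended objective)
```

Same data, same algorithm, two different recommendations: the difference between "addiction" and "recommendation" is encoded in a single weight that someone at the company chooses.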
Ethics in artificial intelligence should be dealt with by public organizations
Because for-profit corporations are biased, we can't rely on them to defend the greater good. The only entities we can possibly trust are public organizations. Because they are not-for-profit, they alone can take a long-term perspective on the drawbacks of technology.
Artificial intelligence may become a weapon of mass destruction. People like Stephen Hawking and Elon Musk are worried about it.
There is an urgent need to raise the awareness of our governments and public organizations to these issues. The biggest hurdle is their lack of literacy in this field. As is often the case, the most vocal voices belong to the experts in the field, and in the case of artificial intelligence, those are for-profit and, essentially, American corporations.