information for transformational people

AI 246: Ethics and artificial intelligence


From an article on Digital Insurance

Sheldon Fernandez, chief executive officer at DarwinAI, recently spoke on the topic "Ethical AI: Separating Fact from Fad" at an Artificial Intelligence conference in New York. Sheldon defined Ethical AI as "the effort to ensure that AI systems behave in a way that is morally acceptable by human standards."

Ethics in AI is extremely important given the proliferation of AI systems in consequential areas of our lives, e.g. financial decision-making systems and the news we consume on Facebook and other media sites. Moreover, there are areas where AI decision-making can literally be the difference between life and death, e.g. autonomous vehicles and healthcare diagnosis. In such cases it is paramount that AI adheres to the ethical standards we set for it.

'Unethical' AI examples have already occurred. In 2016, for example, a US parole algorithm received negative press when it was discovered that the software, which predicted whether offenders would reoffend, was biased against African Americans. Because the system was trained using historical data, it simply mirrored prejudices in the judicial system. In another recent example, a recruiting tool created by Amazon began favouring male candidates over female candidates as a result of the historical data it was fed.
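Neither failure required malicious code: a model fitted to biased historical labels reproduces that bias mechanically. Here is a minimal sketch of the mechanism in Python, using purely synthetic, hypothetical data (no real parole or recruiting data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: one genuinely predictive signal plus a protected attribute.
merit = rng.normal(size=n)
group = rng.integers(0, 2, size=n)      # protected attribute, 0 or 1

# Historical labels encode a bias: group 1 was penalised regardless of merit.
y = (merit - 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, y)

# Two identical candidates who differ only in group membership.
p0 = model.predict_proba([[0.5, 0]])[0, 1]
p1 = model.predict_proba([[0.5, 1]])[0, 1]
print(f"group 0: {p0:.2f}  group 1: {p1:.2f}")  # group 1 scores markedly lower
```

The model is doing exactly what it was asked to do, fitting the data it was given; the unfairness lives in the labels, not in a bug.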

Sheldon observed that it has become faddish to talk about the importance of ethical AI and the need for oversight, transparency, guidelines, diversity, etc., at an abstract, high level. This is not a bad thing, but such talk is often assumed to be tantamount to actually addressing the challenges of ethical AI. The reality is much more complex. For example, guidelines by themselves are often ineffective. Moreover, even if we agree on how an AI system should behave (not trivial), implementing specific behaviour in the context of the complex machinery that underpins AI is extremely challenging.

How then can we get artificial intelligence systems to behave ethically?

He observes, "With many modern AI techniques, the system’s behaviour is a reflection of the data the system is trained against and the human labelers who annotate that data. Such systems are often described as ‘black boxes’ because it is not clear how they use this data to reach particular conclusions, and these ambiguities make it difficult to determine how or why the system behaves the way it does.

"In this context, ensuring that a system behaves ethically is quite challenging as its behaviour is not predicated on simple rules, but is rather the emergent by-product of numerous surrounding factors. Put another way, AI systems learn by looking at data from millions of examples. It is difficult to predict how they’ll behave in new scenarios outside these examples."

How does an organization best determine what ethical behaviour is in the first place, so that it can instill it into AI programs?

Sheldon advises, "This is where the role of committees and guidelines is crucial, as they take the responsibility of prescribing behaviour away from the arbitrary quirks of a single engineer and place it in the hands of a collaborative group that typically consists of policy-makers, industry experts, philosophers, ethicists and engineers. In this way, ethical guidelines are deliberatively determined through collaboration (though implementing them robustly is, again, another story)."

Read the full article here.

As we move towards a world where AI is embedded in a plethora of systems, we may need regulation in this area if practical guidelines and the spread of best practices do not work. Creators of systems using AI need to build strong boundaries into them. This is especially true in scenarios involving the general welfare of human beings: autonomous vehicles, medical diagnosis, weaponry, law enforcement, etc.
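One practical reading of a "strong boundary" is a hard-coded rule, enforced outside the learned component, that overrides the model's output no matter what it suggests. A hedged sketch follows, with an entirely hypothetical model, scenario and threshold:

```python
# Sketch of a hard safety boundary wrapped around a learned controller.
# The model, sensor format and limit below are all hypothetical.

SPEED_LIMIT_MS = 13.9  # 50 km/h, an externally imposed hard bound

def model_speed_suggestion(sensor_data: dict) -> float:
    """Stand-in for a learned controller's suggested speed in m/s."""
    return sensor_data.get("suggested_speed", 0.0)

def safe_speed(sensor_data: dict) -> float:
    suggestion = model_speed_suggestion(sensor_data)
    # The boundary is enforced in plain code, outside the learned system,
    # so it holds even in scenarios the model was never trained on.
    return min(max(suggestion, 0.0), SPEED_LIMIT_MS)

print(safe_speed({"suggested_speed": 22.0}))  # prints 13.9: the bound wins
```

The point is architectural: ethical constraints that must never be violated are safer expressed as explicit, auditable rules than left to emerge from training data.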



From an article on Digital Insurance, 14/05/2019
