Global Study Reveals Top 5 Ethical Principles in Existing AI Guidelines

Spock: There is a certain scientific logic about it.

Anan 7: I’m glad you approve.

Spock: I do not approve. I understand.

- Spock and Anan 7, Star Trek: The Original Series, Season 1, Episode 23, “A Taste of Armageddon”

Artificial Intelligence (AI) is ushering in a new era of CX as companies use AI tools to gain valuable customer insights, improve efficiencies and decision-making, reduce costs, and boost customer and employee satisfaction.

Last year, 74% of the almost 1,200 executives surveyed by IBM said they believe AI will fundamentally change how they approach CX, and 50% said they had already taken action to deploy AI solutions.

However, as the race to use the ever-increasing power of AI heats up across businesses, the issue of ethics in AI is moving to centre stage. We know that AI can improve many aspects of our lives, but at the same time, just how dangerous is it, or could it become?

A Taste of Armageddon

In the classic Star Trek episode A Taste of Armageddon, the Enterprise travels to Eminiar VII, a planet engaged in a 500-year war with nearby Vendikar.

What is special about this war is that it is played out as a computer simulation, which sounds fine at first, until it is revealed that those designated as casualties in the simulated war must report to a real disintegration chamber to be destroyed.

Aired in 1967, the episode has obvious parallels with the Vietnam War, which seemed endless to many at the time and included its own form of random casualty selection in the draft process.

Some 50 years later, the rapid acceleration of technology has seen growing anxiety around the potential for nefarious uses of AI. For example, last year, Google pledged not to develop AI weaponry, and Tesla’s Elon Musk has been very vocal about his concerns regarding the regulation of AI.

Many experts believe that the question of AI becoming dangerous is not a matter of “if” but “when”.

Among them is Louis Rosenberg, Founder and Chief Scientist of Unanimous AI, a company whose Swarm Intelligence platform is designed to produce accurate forecasts, decisions, evaluations, and insights.

In a recent TED Talk, Rosenberg warned that we are about to be hit by an alien intelligence that will challenge our position as the “intellectual top dog” of Earth.

“Many experts predict it will get here in 50 years,” he says. “Some say it will get here in 20. And let’s just get this straight – we have no reason to believe it will be friendly.”

Rosenberg says this ‘alien being’, a superintelligent AI that greatly surpasses our intellectual capacity, will have its own values, morals and self-interests.

“If it behaves anything like we do, it’ll put its own self-interests first to the detriment of all other creatures it encounters,” he cautions.

What should ethical AI look like?

The alarm bells are ringing and people are starting to respond. A global study published in the journal Nature Machine Intelligence this month notes that in the past five years, a number of private companies, research institutions and public sector organisations have issued principles and guidelines for ethical AI.

The fact that AI should be ethical seems like a no-brainer, but there is considerable disagreement about what those ethics should be and how we will enforce them.

The study analysed 84 policy documents written in five different languages and issued by institutional entities from both the private and public sectors. Following a content analysis, the study identified eleven overarching ethical values and principles related to AI. The top five, which were featured in over half of the policy documents, were as follows:

#1 Transparency (87%) - primarily mentioned as a way to minimise harm and improve AI. Also seen as important for legal reasons or to foster trust.

#2 Justice, fairness and equity (81%) - mainly as a method of mitigating unwanted biases and discrimination.

#3 Non-maleficence (71%) - part of a general call for safety and not doing harm. The term ‘harm’ is used to encompass discrimination, violation of privacy, or bodily harm.

#4 Responsibility and accountability (71%) - commonly described as acting with “integrity”, with clear attribution of responsibility and legal liability. Accountability remains quite murky, especially regarding the question of whether AI should be held accountable in a human-like way.

#5 Privacy (56%) - although often left undefined, privacy is mentioned mainly in relation to data protection. Several sources also link privacy to freedom.

Six further overarching themes were found in fewer than half of the sources, with some overlap present among them. They were: beneficence (49%), freedom and autonomy (41%), trust (33%), sustainability (17%), dignity (15%), and solidarity (7%).
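
To make the arithmetic behind these percentages concrete, here is a minimal sketch in Python of how a content-analysis tally turns principle mentions across a set of documents into figures like those quoted above. The document entries below are invented for illustration only; they are not the study’s data, and this is not its actual coding methodology.

```python
# Illustrative only: a toy tally of ethical-principle mentions across policy
# documents. The documents here are made up; in the study there were 84 sources,
# and a principle's percentage is the share of those sources that mention it.

from collections import Counter

# Each entry stands for one policy document and the principles it mentions.
documents = [
    {"transparency", "privacy", "justice and fairness"},
    {"transparency", "non-maleficence"},
    {"responsibility and accountability", "transparency", "justice and fairness"},
    # ... a real analysis would include many more documents
]

counts = Counter()
for principles in documents:
    counts.update(principles)  # each principle counted once per document

total_docs = len(documents)
for principle, count in counts.most_common():
    share = 100 * count / total_docs
    print(f"{principle}: mentioned in {count}/{total_docs} documents ({share:.0f}%)")
```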

Where to from here?

The growing number of policy documents suggests that both public and private entities are starting to place greater focus on AI ethics.

However, the study found that the solutions proposed to meet these ethical challenges diverge significantly, and that some parts of the world are far less represented in the conversation than others.

“[T]he underrepresentation of geographic areas such as Africa, South and Central America and Central Asia indicates that global regions are not participating equally in the AI ethics debate, which reveals a power imbalance in the international discourse,” the study said.

The next logical step would be to try to standardise some global rules for AI governance, but such an initiative is easier said than done. Looking at the results of the study, it seems that we are moving towards consensus on some core principles, but we are still a long way from making them concrete.

How long do we have before ASI arrives?

AI is typically divided into three categories: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).

ANI is what we have today. It is AI that is focused on a single task or set of tasks, such as a chess engine.

AGI is an AI that can learn and perform any intellectual task a human can. It would be able to pass the Turing Test.

ASI is the ‘alien being’ that Rosenberg talks about. It will surpass us in ways we can’t imagine and would be able to quickly reprogram and improve itself. The result would be a sudden explosion in its capabilities – often called the Singularity. At this point, we might lose control of it.

Estimates on when ASI will arrive vary greatly, from as soon as 20 years all the way out to ‘never’.

Google’s Director of Engineering Ray Kurzweil, a famous futurist with many accurate predictions under his belt, claims that we’ll see AGI by 2029 and ASI by 2045.

If he is right, the changes we will see will be unimaginable and, at that point, regulation of AI may even be futile.

Written by Stefan Kostarelis

Stefan is the Content Manager at a Sydney-based investor relations firm, and a freelance writer whose work has appeared in Techly, Paste Magazine, Lost at E Minor and Tech Invest.
