Happy Judgment Day everyone!
On this day in 1997, Skynet, a rogue AI in the movie Terminator 2 (1991), begins its quest for global domination. Now, twenty years later and in the real world, how close are we to a Skynet-type scenario? To answer that question, I spoke to Dr Catriona Wallace, the CEO of ASX-listed Australian enterprise technology company Flamingo (ASX: CR8).
At 2:14 am Eastern Time on August 29th, 1997, Skynet becomes "self-aware", the Terminator (Arnold Schwarzenegger) tells us in his trademark Austrian drawl. After that, humans panic and try to pull the plug, so Skynet fights back. Naturally, this leads to a post-apocalyptic wasteland in which humans are hunted down, and robots run the world. Oh well, at least they figured out time travel, right?
Terminator 2: Judgment Day came out in 1991 and since then "Skynet" has entered the lexicon as a descriptor for any technology we think might be getting out of control. Type "Skynet" into Google News, and you will be met with a vast number of articles warning that the Internet of Things, machine readers, or even Facebook bots are Skynet (they aren't). While this may just be a case of journalistic hyperbole, it also points to the fact that people are genuinely troubled about AI becoming too powerful and wiping us out. So how realistic are those fears?
One person at the forefront of technology is Tesla and SpaceX CEO Elon Musk. When Musk isn't working on getting to Mars, electric cars, or connecting brains to computers, he is a vocal opponent of unregulated AI. In July, Musk had a minor war of words with Facebook CEO Mark Zuckerberg, after Zuckerberg claimed he is an optimist who doesn't believe in AI-based doomsday scenarios.
Musk's response was a tweet in which he accused Zuckerberg of having a "limited" understanding of the topic, which amounts to a pretty scathing burn in the realm of mega-geniuses. Following their little spat, Musk has continued to speak out about the existential threat that AI poses to humans. Earlier this month, he and some of the world's leading robotics and AI minds called on the United Nations (UN) to ban the development and use of "lethal autonomous weapons". However, it is worth noting that they do not fear Skynet per se; their concern is that lethal autonomous weapons will industrialise war and increase casualties, not that the robots will take over.
I've talked to Mark about this. His understanding of the subject is limited.— Elon Musk (@elonmusk) July 25, 2017
The reason robots won't take over anytime soon is that they aren't smart enough and humans still program them. Musk knows this, but his point is that we should get the regulation in place now, so that when AI does advance to our level (and beyond) we will have systems in place to deal with it.
At the moment, we are surrounded by what computer scientists often refer to as Artificial Narrow Intelligence (ANI). The word "narrow" really says it all: ANI is AI that is limited to a set of jobs within rigidly defined parameters. Siri or a Google Home device are good examples of ANI: they can do a few things very well but are unable to perform any intellectual task that we would expect a human to manage.
To be on a par with a human, robots would require what is known as Artificial General Intelligence (AGI), and they aren't there yet. Will they get there? Estimates from those in the field range from "within twenty-five years" to "never". In 2013, Vincent C. Müller and Nick Bostrom surveyed hundreds of AI experts and found that the median estimate for the arrival of AGI (at 50% likelihood) was 2040.
A strong candidate for AGI is IBM's Watson, which in its short life has won the game show Jeopardy, provided knowledge graphs for medical science and even undertaken creative endeavours like cooking and songwriting. Watson is super smart, but as IBM's chief innovation officer has noted, it's "just the first step on a very, very long road."
We haven't created AGI yet, but the Skynet of Terminator 2 is an example of the next step on the AI evolutionary ladder: Artificial Super Intelligence (ASI). This is AI that is smarter than every human on the planet and capable of things we can't even imagine. It's presumably how the machines in Terminator 2 can make a killer robot out of shapeshifting liquid metal. Müller and Bostrom also asked experts how long they think the AGI-to-ASI transition will take and found a median estimate of around 20 years, placing ASI at around the year 2060.
To find out more about the possibility of Skynet and AI uptake in Australia, I spoke to Dr Wallace via telephone.
What do you think of the Hollywood depiction of AI, particularly Skynet in the Terminator series?
Movies like Terminator or TV shows such as HBO's recent Westworld are primarily focused on portraying robots that are either used to kill or for sex. We should ignore the Hollywood depiction and focus on how we can use AI to solve real problems. Health services, customer experience, business efficiency, numbers and analytics. This is where we are seeing the good work in AI being done.
Tell us about Flamingo and your AI, ROSIE
Flamingo provides cognitive virtual assistants for financial services and insurance companies. The virtual assistant is used to guide customers through the entire gamut of their purchase experience with a financial services product. Our AI, named ROSIE, is an example of an ANI that can basically do three things: have a conversation with a customer, make a decision, or guide a customer to the next step of a purchase.
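Flamingo's actual implementation isn't public, so as a purely hypothetical illustration of what "narrow" means, those three capabilities can be sketched as a tiny rule-based program (all function names, rules, and product tiers below are invented for this sketch, not ROSIE's real logic):

```python
# Toy sketch of a narrow, rule-based virtual assistant.
# Everything here is a hypothetical illustration of ANI's three
# capabilities described above: converse, decide, and guide.

def converse(message: str) -> str:
    """Return a canned reply for a handful of known phrases."""
    replies = {
        "hello": "Hi! How can I help with your insurance purchase?",
        "thanks": "You're welcome!",
    }
    # Anything outside the rules falls through to a fallback reply.
    return replies.get(message.lower(), "Sorry, I only handle insurance questions.")

def decide(age: int, smoker: bool) -> str:
    """Pick a (made-up) product tier from two hard-coded inputs."""
    if age < 40 and not smoker:
        return "standard-plan"
    return "assessed-plan"

def guide(current_step: str) -> str:
    """Point the customer to the next step of a fixed purchase flow."""
    flow = ["quote", "application", "payment", "confirmation"]
    i = flow.index(current_step)
    return flow[i + 1] if i + 1 < len(flow) else "done"
```

The narrowness is the point: every input outside the hard-coded rules hits a fallback, and the assistant has no ability to reason about anything beyond its fixed flow, which is exactly the gap between ANI and AGI.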
So, do you see ROSIE becoming AGI one day?
ROSIE will get even narrower. The idea is that she will be very narrow and deep, becoming an expert in a chosen field, for example, life insurance. Eventually, ROSIE will become such an expert that, out of the box, she will be able to talk to clients. That's our goal.
But won't the ROSIEs take all our jobs?
Research shows that 81% of organisations employing AI strategies are committed to redeploying staff rather than, forgive the pun, terminating them. For example, at Flamingo, we have three new internal jobs that have been created by ROSIE. There is the person who trains ROSIE before she goes live, the machine learning trainer who teaches ROSIE, and the robot groomer, who makes sure that ROSIE is behaving appropriately. ROSIE also produces a lot of fabulous data with deep insights into customer experiences and journeys, which requires writers to translate it into an understandable and useful narrative. People need to see AI as supplementing, not replacing, human capital.
What's the state of AI in Australia?
When it comes to AI adoption, Australia has some work to do. India and China are leading the way, followed by Germany, the U.S. and the U.K. Australia is in the middle of the road because it is risk-averse and more of a follower than an innovator. However, there is a substantial opportunity for Australia to be a trial area or gateway into the Asian market. A lot of big players in AI have not been successful in Asia, so this is where Australia might be able to capitalise. We are currently working with two multinational clients to do just that.
What are your predictions for the next few years?
2017 is a year in which large companies are starting to experiment. Big banks are implementing AI such as Watson, but there is a tendency to play it safe. Towards the end of next year, we'll start seeing some of the smaller companies, such as Flamingo, do more exciting things. The good news is the AI race has started, and this will facilitate progress. I see AGI and ASI emerging as soon as 2025 or 2030, but I don't think it will turn nasty like Skynet.
Are you Team Musk or Team Zuck?
People like Elon Musk want regulation, and I fully support his efforts. He has to take an extreme position to be heard and we are not naive as to how AI might go wrong. Currently, only 36% of enterprises are even talking about ethical and moral considerations around AI, and obviously, that number needs to be much higher. But the conversations are taking place and we are focused on using machines to build a better world. There is also a risk of overregulation hindering development, and I think this is the point that Mark Zuckerberg was trying to make. I support Zuckerberg's optimism, but I appreciate Musk's call for regulation.
So, Skynet – yes or no?
No. I see AI as something that will help humans become more human. It will free us up to be more on purpose and pursue higher order work, relationships and problem-solving. Regarding ASI, we have to remember that when it does happen it will be based on rules that were coded by humans. For those of us working in AI, ethics and regulation are very important. We don't actually know exactly where AI is going, which is fine. The only way we can truly shape tomorrow is to do the very best and most ethical work we can today.
But did you like Terminator 2?
Yes, of course. But Westworld was even better.
Photo credit: Melinda Sue Gordon