Last year Google made headlines with a demo of Google Duplex, an artificial intelligence (AI) assistant that can make appointments for you. By itself that isn’t so incredible – the amazing part was that the AI assistant sounded 100% human, complete with pauses and natural-sounding interjections.
Following the demo, people began to debate the ethical implications of the technology. Should we tell people they are dealing with robots? If my AI makes an incorrect reservation or appointment, am I liable for the error? And what about the continuing privacy argument? Google already has your search data, is likely tracking your movements via your smartphone, and has most of your email data as well as voice, facial and fingerprint data. Now you can add your daily schedule as well, so Google knows where and when you need to be.
California’s new bot law
While the ethical debate continues, the US state of California has emerged as a frontrunner in the move to regulate chatbots.
In September, California governor Jerry Brown signed regulations into law that will require companies to disclose when you are talking to a bot.
The law, which will come into effect this July, originally targeted all bots. Following an outcry from the Electronic Frontier Foundation (EFF), however, it was amended to apply only to bots that incentivise a commercial transaction or seek to influence a vote in an election.
The EFF opposed the original bill because, it said, the law would have been “abused as a censorship tool”, “threatened online anonymity”, and “resulted in the takedown of lawful human speech.”
“The original bill targeted all bots, regardless of what a bot was being used for or whether it was causing any harm to society,” the EFF said. “This would have swept up one-off bots used for parodies or art projects—a far cry from the armies of Russian bots that plagued social media prior to the 2016 election or spambots deployed at scale used for fraud or commercial gain.”
After working with the California legislature, the EFF said it managed to remove the “dangerous elements” of the bill and that it approved of the version now passed.
The whole thing is a great example of how regulating AI won’t be simple or clear cut, and that it is something we will have to figure out as we go along.
At the same time, I predict that in the future it will seem silly that AI was ever unregulated. To give you an example, take the car. You wouldn’t think of riding in one without a seatbelt now, but seatbelts became mandatory less than 50 years ago – some 70 years after cars hit the road en masse.
Aussie AI progressing
In fact, Victoria was the first jurisdiction in the Western world to legislate compulsory wearing of seatbelts, doing so in 1970.
While Australia was a world leader when it came to regulating road safety, it still lags in adopting AI, let alone regulating it.
According to AI expert Catriona Wallace, Australia is “well down the ladder” when it comes to AI adoption, which is presumably a precursor to any meaningful regulation.
However, there are green shoots appearing in the world of Australian chatbots.
For example, Wallace herself runs the ASX-listed company Flamingo.ai, which develops conversational AI solutions for the global financial services industry. She is at the forefront of Australian AI development but warns that we run the risk of being left behind.
“The fear I have for Australia is that it needs to step up to the plate on this and start testing, learning, trialling and doing much more than we are or we will get left behind,” she says.
The public sector is also looking to innovate. Just last month, the Queensland Government announced that it has launched two chatbots, MANDI and SANDI, to assist Queenslanders in resolving “neighbourhood disputes”. I assume that in Queensland this is primarily based around arguments about who has the bigger knife.
Actually, MANDI and SANDI assist with issues related to “noise”, “trees”, “fences” or “talking to a neighbour”, according to my brief encounter with MANDI.
While MANDI is designed to walk people through a guided conversation, SANDI is a conversational chatbot for people who have already lodged complaints with the Queensland Civil and Administrative Tribunal.
There is no danger of a failure to disclose that MANDI and SANDI are robots here; the Queensland Government has even gone so far as to include a picture of a friendly cartoon robot waving at you.
As I've argued before, from a CX point of view bot disclosure makes sense: the importance of transparency trumps any perceived benefit to be had from pulling the wool over your customers' eyes.
Outside of chatbots, Wallace says that AI regulation is happening in areas such as military weapons and self-driving cars.
Furthermore, in January, Wallace told Fifth Quadrant that she had just met with Australia’s Human Rights Commission, which is putting a strong focus on the impact of technology on human rights.
Regardless, at present Australia has nothing that approximates the law passed in California. So should it?
From a common sense point of view, regulation of AI seems like a no-brainer. Here we have extremely powerful technology that is quickly attaining the ability to perfectly mimic human speech and behaviour. This sounds absolutely like something that should be regulated. But, as the California law shows, there is still a bit of a minefield that needs to be navigated.
One of the more interesting discussions to arise in the US has been the question of whether or not bots are, in fact, protected by the First Amendment. That probably won’t be an issue in Australia – as, you may be surprised to discover, we have no constitutional guarantee of free speech.
When it comes to AI regulation, it is more of a question of “when” rather than “if”.
Unfortunately, the Australian Government has a less than stellar record when it comes to technology regulation. It is not that they don’t do it; it’s that they do it badly. For example, in 2017 the Government’s new data retention laws rightly came under fire for their infringements on privacy, and last year they were at it again with the passing of a controversial encryption bill. This does not bode well for navigating the complexities and nuances of regulating AI.
Oh well, we can always leave it to AI lawyers to handle.