
Labor wants to regulate AI – here are some of the hurdles to doing so

The Australian government must navigate numerous challenges as it tries to regulate artificial intelligence, experts warn, including the risk of jobs shifting to countries with weaker regulations, the difficulty of curtailing the power of tech companies, and the problem of biased training data.

The Labor Party is developing a policy position and framework for the use of AI within Australia, which is set to become part of its national platform, while the Australian Council of Trade Unions (ACTU) has called for a national body to oversee AI policy.

Dr Dana McKay, senior lecturer in Innovative Interactive Technologies at RMIT University, said there is growing interest in Australia and globally in promoting the ethical use of AI language models.

“There is growing consensus internationally around the way these particular generative AI models should work, which is quite a good thing,” Dr McKay said.

“Things like fair compensation for content creators, and particularly for music and images.”

Currently, there is no regulation of the use of AI language models in Australia, although the federal government has introduced voluntary guiding principles for businesses to responsibly design, develop and implement AI solutions in the workplace.

Workers v automation

Dr McKay said legislation and policies banning the automation of jobs could have the opposite effect by encouraging companies to go offshore to countries without restrictions.

“This isn’t new. Offshoring has happened since the start of international travel and the internet because it was always possible to get things done offshore cheaper than it is here,” she said.

“In some ways, it is just another technology and regulating it should be principles-based.”

The ACTU was contacted for comment.

Powerful tech companies

Australia has previously faced off with tech companies when it introduced the News Media Bargaining Code in 2021, which briefly resulted in Facebook removing news for Australian users.


The News Media Bargaining code pitted the Australian government against multinational tech companies. Photo: Getty

Dr McKay said while attempting to regulate AI, there is the possibility some multinational organisations will move to countries with more favourable conditions.

“If they aren’t based in Australia, it will obviously limit what the government can do,” she said.

“If people don’t like the version of ChatGPT that they can access in Australia, they’ll be able to VPN into the US site or wherever they might want to go to access a different version.”

The challenges facing Australia as it regulates AI aren’t unique, as the European Union also aims to introduce its own AI Act by the end of the year.

It includes significant fines for companies – up to $64 million – that put people's safety at risk by using AI for social scoring, cognitive behavioural manipulation, real-time facial recognition in publicly accessible spaces, or predictive policing.

Truth in the middle

Damith Herath, associate professor in robotics and art at the University of Canberra, said there are arguments both for and against regulating disruptive technology.

“The truth lies somewhere in the middle: There is definitely a need for regulation when we know there’s inherent harm,” he said.

“The important thing right now is to have a conversation between these opposing views and see what works really well for Australia.”

The AI Act will also force generative models like ChatGPT to disclose that content was created using AI, prevent the generation of illegal content, and publish summaries of the copyrighted data used to train the model.

Addressing biases

Dr McKay said another issue is that guidelines in Australia don’t specifically address biases prevalent in the material used to train generative AI.

“There’s a famous study from the US of AI-based sentencing decisions disproportionately giving longer sentences to African-Americans, simply because the data it was trained on showed African-Americans got longer sentences,” she said.

“I’m particularly concerned about automated decision-making in an Australian context because the vast majority of data underpinning these models comes from overseas.”

AI systems learn and make decisions based on the data they are provided for training, which often includes the biases of the people who created that data or reflects historical and social inequalities.

Dr Herath said it is just as important to consider whether companies have a moral right to use public data for commercial purposes.

“There are moral and ethical considerations that are much more worrisome than where the data comes from,” he said.

“Some of these new technologies are in a rapid phase of development and are quite difficult to keep up with compared to the last digital and industrial revolutions, and that is where government needs to look at the ramifications of these tools.”

Copyright © 2024 The New Daily.
All rights reserved.