AI is the latest weapon in the arsenal of school bullies

School bullies can easily use the latest AI generative technology to target victims. Photo: Getty

The Australian online safety regulator has received its first reports of sexually explicit, artificially generated imagery being used by students to bully others – signalling the latest evolution of online bullying.

On Tuesday, the eSafety Commissioner warned that the artificial intelligence (AI) industry must take action, with AI-generated child sexual abuse material and deepfakes already being reported to eSafety investigators.

The details of the child-bullying cases were not made public, but the commissioner said they came after a small but growing number of reports of “distressing and increasingly realistic deepfake porn”.

“This month, we received our first reports of sexually explicit content generated by students using this technology to bully other students,” eSafety Commissioner Julie Inman Grant said.

“The danger of generative AI is not the stuff of science fiction.”

‘Incalculable harm’

“Harms are already being unleashed, causing incalculable harm to some of our most vulnerable.”

Ms Inman Grant said it’s becoming harder to tell what’s real and what’s fake online, meaning it’s much easier for AI-generated material like deepfakes to inflict damage.

Online bullying is nothing new, but the ability to create realistic images or videos of other people in various scenarios has the potential to be incredibly harmful.

And while it may be easy to have harmful material depicting children taken down from legitimate websites, there’s never a guarantee the material has been deleted from people’s personal devices.

AI tools easier than ever to use

Tama Leaver, professor of internet studies at Curtin University and chief investigator at the Australian Research Council Centre of Excellence for the Digital Child, said it’s easy to feed AI tools a source image to be altered according to your particular request.

Although the more mainstream tools block the creation of nudity or other pornographic material, he said there will always be tools catering to those types of requests if people look for them hard enough.

And even when the ability to create nude or sexually explicit material is blocked, bullies can use AI to generate materials that could be deemed offensive in other ways, such as gender swapping.

“I doubt it’s, at this point in time, terribly hard to find a way to use these tools … essentially for abuse,” he said.

“We’ve had tools of that nature around for some time; we’ve been talking about deepfake technologies for at least three or four years, in a mainstream way. But [what’s] happening with these generative AI tools is that the actual effort involved to create something has gone down dramatically.

“It’s not a surprise to think that when new creative tools come along, there will be a small number of people playing with them for the purposes of bullying and harm.”

Education is the way forward

With just one source image being enough for most generative AI tools to create realistic-looking deepfakes, some parents’ gut reaction may be to yank their child – and their child’s images – from the internet altogether.

But Dr Leaver said, while people should pay attention to their privacy settings, they shouldn’t be excluded from the “joy of connectivity and sharing” because of “a few bad actors”.

He said children should also be allowed to experiment with AI tools, as they will likely play a big role in future careers and everyday life.

Instead, the focus should be put on broader digital literacy, including educating young people about the consequences of using AI with ill intent.

Early education on the pitfalls of the internet and AI could be key to helping generations of kids raised with personal devices. Photo: Getty

“I don’t think it’s realistic to think that we’re going to block or ban these [generative AI] tools, because there are many legitimate, creative uses of these tools,” Dr Leaver said.

‘A cultural problem’

“The genie’s out of the bottle … so we’re not going to fix this as a technology problem; we’re going to have to fix this as a culture problem.

“We need young people to understand that using somebody else’s likeness in a way that might be experienced as harm, abuse or bullying is something they need to choose not to do, rather than wait for someone to block the tools.”

Ms Inman Grant said if you use the internet, you should also … take time to understand what personal information is being accessed from the open web, and then take steps to protect your data.

More information on popular generative AI services such as ChatGPT and Google Bard can be found on The eSafety Guide.

Parents and carers should also be mindful of how much they share about their children online, as a single photo could end up being ‘harvested’ from social media or other websites and used for unintended purposes, Ms Inman Grant said.

“In the worst cases, these photos can be used to generate child sexual exploitation material and end up on paedophile websites and forums,” she said.

“While it’s a horrifying thought that a photo of a five-year-old can be manipulated for evil, it’s a tragic reality that we’re facing as a global community.”

What to do if your child is targeted

If your child has been targeted with AI-generated bullying material by a fellow student, Dr Leaver said their school should always be notified.

However, there’s not much schools can do to prevent children from using AI in a harmful way when they’re at home.

Depending on the severity and impact of the material, the police and the eSafety Commissioner can also be contacted.

Ms Inman Grant said you should keep an eye on any changes in your child’s behaviour which might suggest they’re struggling or being bullied online – and keep reminding them that you always have their back if things go wrong.

“If your child is experiencing cyber bullying, take screenshots and record the URLs before deleting anything. Then report it to the platform where the bullying is happening,” she said.

“If they don’t respond, report it to eSafety.gov.au because we have powers to require the removal of seriously harmful content, including cyber bullying.

“eSafety’s requests to online platforms to remove serious cyber bullying material were successful in 88 per cent of cases last financial year.”

Signs that suggest a child is the target of cyber bullying include:

  • They appear sad, lonely, angry, worried or upset more than usual
  • Unexpected changes in friendship groups or not wanting to be around people, even friends
  • Changes in personality, such as becoming more withdrawn or anxious
  • Changes in sleep patterns, eating or energy levels
  • Becoming secretive about their mobile phone use or what they are doing online.

Producing, disseminating, storing and hosting sexually explicit material featuring children are crimes in Australia.

If you know of a child who is being groomed or has had explicit material of them shared, report it to the Australian Federal Police-led Australian Centre to Counter Child Exploitation.

In consultation with law enforcement, eSafety assists with the removal of this content.

Copyright © 2024 The New Daily.
All rights reserved.