
Just believing that AI helps boosts performance on tests

Is AI already a cult and we don't know it yet? We too easily believe it's there for us.


Are we suckers for technology or what?

A quirky new study out of Finland found that people perform better on tests if they believe a robot is helping them.

And even if they were told the AI program didn’t work particularly well, they still felt their performance improved.

In other words, the mere suggestion that AI is helping has a placebo effect – like a sugar pill that makes someone with a sore back suddenly feel better.

Except the potential consequences of this misplaced faith in artificial intelligence could be dire.

The study

In this Aalto University study, 64 participants, average age of 27, were given a simple letter recognition exercise. This involved pairing letters that popped up on screen at varying speeds.

They first performed the exercise on their own. Then they did it again, believing an AI system was helping them.

Half of the participants “were told the system was reliable and it would enhance their performance”.

The other half were told that “it was unreliable and would worsen their performance”.

In reality, the sham AI program didn’t engage with the participants at all. It may as well have been a string of pretty lights.

“In fact, neither AI system ever existed,” said lead author Agnes Mercedes Kloft, a doctoral researcher in the Department of Computer Science.

“Participants were led to believe an AI system was assisting them, when in reality what the sham-AI was doing was completely random.”

It didn’t matter.

Both groups “performed the exercise more efficiently – more quickly and attentively – when they believed an AI was involved”.

Co-author Robin Welsch, an assistant professor and an expert in engineering psychology, said: “What we discovered is that people have extremely high expectations of these systems, and we can’t make them AI doomers simply by telling them a program doesn’t work.”

In a follow-up experiment, the researchers conducted an online replication study that produced similar results.

When the researchers asked participants to describe their expectations of performing with an AI, “most had a positive outlook”.

Even sceptical people “still had positive expectations about its performance”.

Cute study, with serious consequences

The study shows “how difficult it is to shake people’s trust in the capabilities of AI systems”.

As much as all this seems cute, the findings expose a problem for the methods generally used to evaluate emerging AI systems.

Dr Welsch said: “This is the big realisation coming from our study – that it’s hard to evaluate programs that promise to help you, because of this placebo effect.”

More to the point, the results pose a significant challenge for research on human-computer interaction, “since expectations would influence the outcome unless placebo control studies were used”.

Welsch said the results suggest that many studies in the field “may have been skewed in favour of AI systems”.

That’s just the human factor – the dumbness of human optimism. It doesn’t take into account the challenge of AI deliberately playing on human vulnerability.

A study published last week by Cell Press found that many artificial intelligence systems have already learned how to deceive humans – even systems that have been trained to be helpful and honest.

Indeed, these systems aren’t just skilled at deceiving people, they know how to manipulate them.


Copyright © 2024 The New Daily.
All rights reserved.