Alan Kohler: How large language models and large money go together

There’s too much money being made in AI to regain any semblance of control, Alan Kohler writes.

AI chip maker Nvidia was briefly the world’s most valuable company last week, having risen 40 per cent in a month – the kind of move you expect from small companies, not giants.

Also, Geoffrey Hinton, the British-Canadian computer scientist known as the father of artificial intelligence, told 60 Minutes that AI may already be more intelligent than we realise and will eventually “take over”, whatever that means.

And Ilya Sutskever, one of the founders of OpenAI, launched his own start-up, Safe Superintelligence, to take on his old company. Presumably his won’t take over.

So this might be a good time to step back and reflect some more about what is going on.

Money first

First, the idea that AI will take over and enslave humanity is worrying, for sure, but this is a product that everyone thinks will make tons of money, and so as periodically happens in capitalism, there’s a mad scramble to be the winner, oblivious to dangers, accompanied by stupendous amounts of money and perplexed enthusiasm in the media.

It is already way beyond the capacity of governments to control, even if they wanted to, which they don’t because they’re in a parallel geopolitical contest, especially the US and China.

Maybe one day we’ll wake up and say: “Oh, whoops, AI wasn’t such a great idea”, as we did with slavery, fossil fuels, tobacco and sugar but for the moment, it’s happening – there’s too much money being made. Stand back.

How AI works

There are three basic forms of AI: generative, which creates things; cognitive, which makes decisions; and analytical, which does what its name suggests.

The focus at the moment is on generative AI, and the most common form of it – where most of the money is going – is large language models (LLMs).

An LLM is a type of algorithm that generates human-like text after being trained on huge amounts of data – basically everything on the internet, which is most of what has ever been written or said.
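To make that concrete, here is a toy sketch of the core idea in plain Python: learn from some text which word tends to follow which, then generate new text one word at a time. Real LLMs do this with neural networks and billions of parameters rather than simple word counts, so treat this purely as an illustration of the shape of the idea, not how any actual model works.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" standing in for the whole internet
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Count, for each word, which words follow it in the training text
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Pick the word most often seen after `word` in the training text."""
    return following[word].most_common(1)[0][0]

# Generate a short sequence, one predicted word at a time
word = "the"
output = [word]
for _ in range(4):
    word = next_word(word)
    output.append(word)

print(" ".join(output))
```

Scale the corpus up to the internet and swap the word counts for a neural network, and this "predict the next word, then repeat" loop is essentially what ChatGPT does when it answers a question.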

Nvidia briefly became the world’s most valuable company because the mushrooming developers of LLMs need its graphics processing units (GPUs) to train them, and for the moment, no others will do.

Game on

Nvidia originally designed GPUs in the 1990s to run video games, because the central processing units (CPUs) in computers weren’t fast enough for graphics.

During the early 2000s, after the internet crash, a lot of scientific work went into semantics, language modelling and word techniques because the internet was still there and colossal volumes of data were becoming available so it was a worthwhile project to make better use of it than just with Google search.

Around 2016, Nvidia found that its video game GPUs were also good for AI and started selling them for that purpose. Add those super-fast processors to the new language algorithms and all the data on the internet, and you get software that can sort through everything ever written and answer questions in normal human language.

Data ‘training’

For the past five or so years a few geeky companies have been “training” LLMs on all the data on the internet, and the first to break into public view was a not-for-profit named OpenAI, which released ChatGPT on November 30, 2022.

There are now dozens of for-profit outfits vying to catch up and get attention (Wikipedia maintains a list of them); OpenAI has kind of gone for-profit as well, in partnership with Microsoft, which is why good guy Ilya Sutskever is now attacking it.

Investors and venture capitalists are piling onto this new industry like passengers onto one of those trains in India, and technology is once again eating the US stock market as it did in the 1990s – not just Nvidia but anything connected to AI is going crazy. The technology sector is now a third of the US market.

My favourite LLM at the moment is Perplexity, founded in 2022 by a group of blokes from OpenAI, Meta and Google, with $US165 million from Jeff Bezos and Nvidia. I also like Google’s Gemini, Amazon’s Alexa and, of course, ChatGPT. But I’ve tried only a fraction of the ones that are available, and I use only a fraction of what they can do.

The key to the quality of an LLM is the number of parameters. I asked a few of them to tell me what a parameter is in AI, and the best answer came from Google’s Gemini (note that this wasn’t copied from somewhere, Gemini thought of it itself):

“In AI, parameters are essentially the adjustable dials within a model that are fine-tuned during the training process. They are like the knobs on a radio that you adjust to get the right station.

“Imagine you’re training a model to recognise different types of flowers. The model analyses pictures of roses, daisies, and tulips (the training data). It adjusts its parameters (the dials) based on the features it sees in these images (petal shape, colour, etc.). By adjusting these parameters, the model learns to distinguish between roses, daisies, and tulips (improved performance).”
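Gemini’s dial-turning metaphor can be sketched in a few lines of Python. The toy model below has just one parameter, w, which gets nudged after every training example until its predictions match the data – the same idea, in miniature, as adjusting an LLM’s billions of parameters. (An illustration only, not how any real model is built.)

```python
# Training data: inputs x with targets y = 2x, so the "right"
# setting of the dial is w = 2
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # the model's single parameter – the dial
learning_rate = 0.05  # how far to turn the dial on each mistake

for _ in range(200):          # pass over the training data repeatedly
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # Turn the dial a little in the direction that shrinks the error
        w -= learning_rate * error * x

print(round(w, 2))  # the dial has settled close to 2.0
```

An LLM does the same thing with trillions of words of text instead of three numbers, and billions of dials instead of one – which is why training it needs warehouses full of Nvidia GPUs.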

The latest Perplexity model has 70 billion parameters (I had to get that from Google – Perplexity wouldn’t tell me).

OpenAI hasn’t revealed how many parameters are in GPT-4, but estimates range from 1.7 trillion (Wikipedia) to 100 trillion, which is roughly the number of synapses in a human brain. I doubt that it’s 100 trillion, but that seems to be the general goal – human-level intelligence.

Next step

So where is all this heading?

Well, one thing is for sure: none of the companies developing LLMs will make any money if all they do is replace Google search, giving a single answer to every question instead of a list of links to look through.

That’s useful, for sure, and everyone will eventually do it and stop using ordinary Google, but there are too many companies doing it for any of them to make much money, and it’s hard to run ads like Google AdWords against a single answer.

Then again, there were a lot of struggling search engines in the early days of the internet, and Google ate or killed them all, and maybe Google will successfully transition its vast search market share to Gemini. Or maybe ChatGPT will win. Or another one. I don’t know.

Also, LLMs mainly deal in text, and everyone is using voice, images and video these days, which is why Instagram, TikTok and YouTube are winning the internet and text publishers are losing. No one reads any more.

But these are early days, like dial-up internet in the early 2000s before broadband and wi-fi.

Just wait …

Every piece of existing technology will soon contain AI – generative, cognitive and analytical – and a flood of new products will come onto the market, such as autonomous cars and trucks, labour-saving devices for companies (e.g. AI journalists) and, above all, intelligent robots.

I suspect the robots will be the next greatest consumer product ever invented after the iPhone.

You’ll buy a robot at JB Hi-Fi or Harvey Norman for, say, $10,000 to $20,000, and it will do the housework and gardening, mind the children and do their homework, manage your diary and discuss Plato and Nietzsche with you.

Oh, and the one you buy at a shadier shop down the road for a bit more money will have sex with you as well.

Then they’ll take over.

(I’m indebted to Mohamed Abdelrazak, professor of applied artificial intelligence at Deakin University, for help with this item, although we didn’t discuss the sex bit.)

Alan Kohler writes weekly for The New Daily. He is finance presenter on ABC News and also writes for Intelligent Investor.

Copyright © 2024 The New Daily.
All rights reserved.