
Karlin Lillington: Getting to grips with AI

What combination of algorithm, data input and analysis crosses the line between machine learning and artificial intelligence?

I was surprised by the subject heading of an email in my inbox this week: “The functionality behind robo-advisers.”

Robo-advisers? How retro. I had visions of Arnold Schwarzenegger thrusting handfuls of investment brochures my way. The email was from the US investment management giant Vanguard, and, curious, I clicked through on the link for more information.

Forgive my investment ignorance, as I am sure so many of you knew all about this, but I had no idea robo-advisers were now commonplace, particularly in the US, or that this shift towards using automated digital investment managers had begun in earnest more than five years ago. Vanguard offers one such service.

For me, the term had stood out because it seemed so … dated. I’d have expected something like “financial AI”.

Vanguard avoided calling its offering an AI, but on further exploration there seems to be no broad industry agreement on whether robo-advisers are AIs or something, well, lesser. Some say they are AIs, some say no, and some "robo" financial services are described as "hybrid AIs".

So what makes an AI an AI? At what point does some combination of algorithm, data input and analysis cross the line between plain old machine learning and (trumpet fanfare) artificial intelligence? Is the double-barrelled vowel pairing mostly a marketing ploy? Does it matter?

Well, yes, it does. As the tech world seems increasingly ready to call almost anything an AI, such questions grow ever more pertinent, and the answers will shape cultural, societal, business and, critically, regulatory assumptions. Yet as the term "AI" becomes commonplace, the risk of confusion, unwanted outcomes and unanticipated effects expands.

Until fairly recently, even tech journalists weren't being bombarded with AI-speak from companies. The term was mostly confined to tech giants touting aspirational services of the sometime-future and, even then, many of us scoffed. But everything has changed with the breathless excitement that has greeted ChatGPT, the online AI (though is it really an AI?) that answers questions and responds to requests with grammatically correct, competently written replies.

According to Google Trends, searches for "AI" remained fairly static for most of last year until the end of September; over the next four months, they doubled in volume. What's more interesting, though, is that searches for the term had been flat for the previous 10 years. Nothing had happened to spike interest in something most of us probably equated with sentient robots or Star Trek's Mr Data, and no one imagined their arrival anytime soon.

But 2022, especially its tail end, was the year of AI. Searches began a slow climb during the year, and then came a massive spike in November.

Why? Compare that trend to searches for the term "ChatGPT", which was released to early users in, yes, November 2022: the two surges overlap exactly. ChatGPT clearly drove interest in AIs. And yet many experts argue that ChatGPT is not intelligent, and not an AI, but very snazzy machine learning. To confuse the two, some argue, is misleading, even dangerous.

Certainly, an "intelligence" fed the undifferentiated slurry of information on the internet, or data sets known to suffer from all sorts of bias (unintentional bias is bias nonetheless), ends up with a "mind" awash with misinformation and trash it treats as just more data points. It's not at all clear how the junk is to be filtered out, or even who determines what is junk. We've already seen that what some technologists believe to be unbiased, representative, diverse data can be horribly excluding and unrepresentative.

ChatGPT itself doesn't offer much clarification on how to define a robo-adviser, or on whether robo-advisers are AIs: "A robo-adviser is a type of financial adviser that uses algorithms and computer programs to provide investment advice and manage portfolios. Some robo-advisers use artificial intelligence (AI) to make investment decisions or provide recommendations, but not all of them do." The circle goes round and round.

Right now, the EU is finalising advanced legislation to regulate AIs. A compromise version of the proposed Artificial Intelligence Regulation (the AI Act) was approved by the EU Council on December 6th; next month it goes to a vote in the European Parliament, after which the "trilogue" process begins between member states, parliament and the commission, with final approval expected by the end of this year.

You'll still be hard-pressed to understand what the EU intends to regulate as an AI, despite the Act's probable reach and international impact. Because the Act will extend to companies outside the EU and, like the GDPR, allows massive fines, it will shape how an AI is defined globally. Yet the current draft is murky about what it is regulating, defining an AI as a system with an "element of autonomy".

Legal experts warn that this risks creating an uncertain regulatory landscape too open to interpretation. Pity the poor robo-advisers who, even now, must be fretting over whether they are, or are not, AIs subject to this future Act.