We cannot afford to drop the ball on AI regulation as badly as we have with social media, because the consequences will be far worse.
What were you doing in 2004? I remember sitting on the edge of my seat at the movies, cheeks filled with popcorn, as I stared into the future on the big screen ahead.
In this future, robots mingled alongside humans, many of them doing jobs like walking dogs, delivering parcels and collecting rubbish bins. Others romanced humans and lurked suspiciously in the shadows.
The world of I, Robot seemed like one I might never live to see, but 22 years later, it looks like a near and scary reality.
Artificial intelligence (AI) models are part of many people’s everyday lives, both personally and professionally. One of the most popular, ChatGPT, responds to more than 2.5 billion prompts a day. It is one of hundreds of AI models worldwide. Meanwhile, chatbots help with everything from losing weight to feeling loved.
AI continues to take over our lives – there for the memorable and the mundane, an ever-present companion predicted for decades.
But who polices it? Not just the end product but also its research and development?
It would be reckless to leave oversight to companies that depend on AI for profit. We have seen this movie before with the development of social media, where big tech companies ran amok for decades, destroying millions of young lives, all to amass eye-watering profits for their investors.
The questions of regulation, and even of direct government intervention, have sadly taken too long to reach the ears of politicians.
But they finally did this week in the US, UK and Korea, when Anthropic's new AI model, Mythos, showed such alarming hacking and vulnerability-detection skills that emergency meetings were called with banking executives and, likely, with government departments to analyse how much harm it could do to critical infrastructure.
The model is said to be so advanced that it has been restricted from the public and is being scrutinised the world over.
Its potential for preventing hacks is great, but so is its ability to exploit them – an uncertainty that should concern governments everywhere.
SA also at risk
There is no indication that such urgent meetings also happened in South Africa, but the local banking sector is dealing with its own crisis.
One of South Africa’s biggest banks, Standard Bank, recently had its infrastructure rattled by a data breach, with details emerging this week that sparked alarm.
It may have told customers that its banking systems were safe, but that may have been more a plea for clients and investors not to flee. The fact that the bank continues, weeks after the breach, to reveal new discoveries about how its customers' personal information was exposed suggests it does not exactly have a handle on things.
This breach isn’t the first, and it won’t be the last; neither is Standard Bank the only institution being targeted.
But the sheer number of unanswered questions surrounding the breach hints at a future of uncertainty. One where, wave after wave, we wonder if vital digital infrastructure can hold firm.
Its collapse would be more catastrophic than the already devastating plague of stolen transformers, illegal connections, dodgy water tankers, rusty old pipes and full landfills.
We need rules
In the dystopian world of I, Robot, there are three laws for robots: never harm a human or allow a human to come to harm; always obey humans unless this violates the First Law; and protect their own existence unless this violates the First or Second Laws.
They are basic laws that were created not by a legitimate government but by science fiction – brought to the big screen by Hollywood more than 20 years ago – but at least someone had a crack at it.
Legislators would do well to at least look at this, because right now, there seems to be no one to protect us from the harm on the virtual horizon.