Nations — Canada included — are running out of time to design and implement comprehensive safeguards on the development and deployment of advanced artificial intelligence systems, a leading AI safety company warned this week.
In a worst-case scenario, power-seeking superhuman AI systems could escape their creators’ control and pose an “extinction-level” threat to humanity, AI researchers wrote in a report titled Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI, commissioned by the U.S. Department of State.
The department insists the views the authors expressed in the report do not reflect those of the U.S. government.
But the report’s message is bringing the Canadian government’s actions to date on AI safety and regulation back into the spotlight — and one Conservative MP is warning the government’s proposed Artificial Intelligence and Data Act is already out of date.
AI vs. everyone
The U.S.-based company Gladstone AI, which advocates for the responsible development of safe artificial intelligence, produced the report. Its warnings fall into two main categories.
The first concerns the risk of AI developers losing control of an artificial general intelligence (AGI) system. The authors define AGI as an AI system that can outperform humans across all economically and strategically relevant domains.
While no AGI systems exist to date, many AI researchers believe they are not far off.
“There is evidence to suggest that as advanced AI approaches AGI-like levels of human and superhuman general capability, it may become effectively uncontrollable. Specifically, in the absence of countermeasures, a highly capable AI system may engage in so-called power seeking behaviours,” the authors wrote, adding that these behaviours could include strategies to prevent the AI itself from being shut off or having its goals modified.
In a worst-case scenario, the authors warn that such a loss of control “could pose an extinction-level threat to the human species.”
“There’s this risk that these systems start to get essentially dangerously creative. They’re able to invent dangerously creative strategies that achieve their programmed objectives while having very harmful side effects. So that’s kind of the risk we’re looking at with loss of control,” Gladstone AI CEO Jeremie Harris, one of the authors of the report, said Thursday in an interview with CBC’s Power & Politics.
The second category of catastrophic risk cited in the report is the potential use of advanced AI systems as weapons.
“One example is cyber risk,” Harris told P&P host David Cochrane. “We’re already seeing, for example, autonomous agents. You can go to one of these systems now and ask … ‘Hey, I want you to build an app for me, right?’ That’s an amazing thing. It’s basically automating software engineering, this entire industry. That’s a wicked good thing.
“But imagine the same system … you’re asking it to carry out a massive distributed denial of service attack or some other cyber attack. The barrier to entry for some of these very powerful optimization applications drops, and the destructive footprint of malicious actors who use these systems increases rapidly as they get more powerful.”
Harris warned that the misuse of advanced AI systems could extend into the realm of weapons of mass destruction, including biological and chemical weapons.
The report proposes a series of urgent actions that nations, beginning with the U.S., should take to safeguard against these catastrophic risks, including export controls, regulations and responsible AI development laws.
Is Canada’s legislation already defunct?
Canada currently has no regulatory framework in place that is specific to AI.
The government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 in June 2022. It’s intended to set a foundation for the responsible design, development and deployment of AI systems in Canada.
The bill has passed second reading in the House of Commons and is currently being studied by the industry and technology committee.
In 2023, the federal government also introduced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, a code designed to give Canadian companies common standards to follow until AIDA comes into effect.
At a press conference on Friday, Industry Minister François-Philippe Champagne was asked why — given the severity of the warnings in the Gladstone AI report — he remains confident that the government’s proposed AI bill is equipped to regulate the rapidly advancing technology.
“Everyone is praising C-27,” said Champagne. “I had the chance to talk to my G7 colleagues and … they see Canada at the forefront of AI, you know, to build trust and responsible AI.”
In an interview with CBC News, Conservative MP Michelle Rempel Garner said Champagne’s characterization of Bill C-27 was nonsense.
“That’s not what the experts have been saying in testimony at committee and it’s just not reality,” said Rempel Garner, who co-chairs the Parliamentary Caucus on Emerging Technology and has been writing about the need for government to act faster on AI.
“C-27 is so out of date.”
AIDA was introduced before OpenAI, one of the world’s leading AI companies, unveiled ChatGPT in November 2022. The AI chatbot represented a stunning evolution in AI technology.
“The fact that the government has not substantively addressed the fact that they put forward this bill before a fundamental change in technology came out … it’s kind of like trying to regulate scribes after the printing press has gone into widespread distribution,” said Rempel Garner. “The government probably needs to go back to the drawing board.”
In December 2023, Gladstone AI’s Harris told the House of Commons industry and technology committee that AIDA needs to be amended.
“By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times beyond what we see today,” Harris told MPs. “AIDA needs to be designed with that level of risk in mind.”
Harris told the committee that AIDA needs to explicitly ban systems that introduce extreme risks, address open source development of dangerously powerful AI models, and ensure that AI developers bear responsibility for ensuring the safe development of their systems — by, among other things, preventing their theft by state and non-state actors.
“AIDA is an improvement over the status quo, but it requires significant amendments to meet the full challenge likely to come from near-future AI capabilities,” Harris told MPs.