
U.S. lawmakers haven't yet regulated Big Tech. Artificial intelligence could be more challenging

The U.S. Congress has yet to regulate some of the Big Tech companies like Meta and Google. And the issues raised at the Senate hearing on artificial intelligence illustrated the challenges of regulating that industry, some experts say.

Sam Altman testified that AI systems could cause 'significant harm to the world'

A man in a jacket and tie speaks into a microphone
OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence Tuesday on Capitol Hill in Washington. (Patrick Semansky/The Associated Press)

Computer scientist Eliezer Yudkowsky, who has predicted "we are all going to die" if a superhumanly smart artificial intelligence is created under the current circumstances, said he nevertheless understands the challenges lawmakers face in regulating the technology.

"The analogy I sometimes use is that AI is like nuclear weapons, if nuclear weapons spit out gold, and spit out more and more gold as you made them bigger, until finally they reached a threshold nobody could calculate in advance, and then exploded the entire world,"Yudkowsky, a co-founder of the Machine Intelligence Research Institute, wrote in an email to CBC News.

"That's an unusually difficult situation to regulate."

It's this regulatory conundrum (AI may produce many benefits but, left unchecked, could negatively impact society and perhaps pose a significant threat to humankind) that some U.S. politicians faced at a U.S. Senate Judiciary Committee hearing on Tuesday.

'Cause significant harm'

The hearing featured testimony from Sam Altman, the head of OpenAI, the artificial intelligence company that makes ChatGPT. He advocated for a series of regulations to confront the risk of increasingly powerful AI systems that even he acknowledged could "cause significant harm to the world."

But Congress has yet to regulate some of the Big Tech companies like Meta and Google. And the issues Altman and others raised at the hearing on AI also illustrated the challenges of regulating that industry, some experts say.

"I thinkthat one of the biggest problems with AI regulations is [defining]what is AI, anyway," said Matthew O'Shaughnessy, visiting fellow of technologyand international affairs at the Carnegie Endowment for International Peace.

O'Shaughnessy said some of the panellists and senators at the hearing, which included the testimony of IBM's chief privacy and trust officer Christina Montgomery and AI expert Gary Marcus, a professor emeritus at New York University, were talking about AI as a really broad concept, while others were talking about it in a very narrow way.

"Kind of the core problem is that AI is this 'know it when you see it' concept, that's constantlyevolving. That's really hard [to put into] a legal definition."

WATCH | Congress grapples with artificial intelligence:

Artificial intelligence makes opening statement at U.S. Senate hearing

To open the hearing on artificial intelligence, Democratic Sen. Richard Blumenthal played a recorded statement created entirely by OpenAI's ChatGPT and AI voice cloning software trained on his own speeches to mimic his voice. OpenAI CEO Sam Altman also testified at the hearing, calling on the government to regulate artificial intelligence.

In Canada, the government has proposed the Artificial Intelligence and Data Act to "protect Canadians" and "ensure the development of responsible AI." But there are no current federal proposals in the U.S.

Concerns have been raised about ChatGPT, a chatbot tool that answers questions with convincingly human-like responses, and about the ability of the latest crop of generative AI tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

Meanwhile, others, like Geoffrey Hinton, known as the "godfather of AI," along with Yudkowsky, have expressed fears that unchecked AI could wipe out humanity.

Yudkowsky fears that an all-too-powerful superhuman intelligent AI would, as he wrote in Time in March, "not do what we want, and does not care for us nor for sentient life in general."

"The likely result of humanity facing down an opposed superhuman intelligence is a total loss."

Yudkowsky said that, at the hearing, Altman and IBM's Montgomery were playing coy about AI's worst-case scenarios raised separately by himself and Hinton.

"The actual danger is that everyone on Earth dies," he said. "Sam Altman knows that; it seems he's decided that Congress can't be trusted with the information. And, honestly, I'm a bit sympathetic to that decision, and so probably are most individual Congresspersons even if they're not allowed to say thatout loud."

'Daunting' task ahead

Computer scientist Mark Nitzberg, the executive director of the Center for Human-Compatible Artificial Intelligence, said policymakers certainly have a "daunting" task ahead of them in terms of regulation.

"How is it that we have this system that no one understands how it works, everyone agrees that it's very powerful and there are absolutely no regulations at all controlling any of that," he said.

A major problem is that while AI is a highly capable system in many ways, it can be random, make things up, and no one really understands the principles by which it operates, Nitzberg said.

"This is not the case for any other engineered system that we have rules about."

AI is different from gene editing or climate science, where the science is worked out and the remaining work is political, such as regulations and general agreements, he said.

The OpenAI logo.
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT in this March 21, 2023, file photo. No one really understands the principles by which AI operates, one expert notes. (Michael Dwyer/The Associated Press)

"You're forced to do the politics before the science is delivered."

Most testing and post-mortem analysis for engineered systems, for example in car and aircraft safety, depend on systems behaving predictably: performing the same way when put in the same situation, Nitzberg said.

But large language models can give different responses to the same prompt twice in a row, so different kinds of testing and monitoring methodologies will need to be created, he said.

Bart Selman, a computer science professor at Cornell University and director of the Intelligent Information Systems Institute, said regulations can take years to develop, and that even with input from stakeholders, they often don't deal with some of the real problems.

Some critics have suggested that Altman's call for regulations could actually be self-serving. In an interview with ABC's Start Here podcast, Gizmodo technology reporter Thomas Germain pointed out that it's not unusual for the tech industry to ask to be regulated.

WATCH | The godfather of AI is worried about risks to humanity:

He helped create AI. Now he's worried it will destroy humanity

Canadian-British artificial intelligence pioneer Geoffrey Hinton says he left Google because of recent discoveries about AI that made him realize it poses a threat to humanity. CBC chief correspondent Adrienne Arsenault talks to the 'godfather of AI' about the risks involved and if there's any way to avoid them.

"Some of the biggest proponents of privacy laws are Microsoft and Google and Meta, in fact, because it gives tech companies a huge advantage if there are laws that they can comply with," he said."That way, if something goes wrong, they can just say, 'Oh well, we were following the rules. It's the government's fault for not passing better regulation."

At the hearing, Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to "take that licence away" and ensure compliance with safety standards.

Nitzberg noted that on some chat forums, some have suggested there will be "precious few who have the resources and the connections to get a license and therefore, [Altman's] assuring his own regulatory capture."

However, Nitzberg said he doesn't view Altman as a cynic who is only saying that AI is dangerous in order to "get on the good side of people so that he can build a larger empire."

"He was talking about the dangers of AI back in 2016."

'Stifle innovation'

Meanwhile, there are other concerns about AI regulation: that too much government interference could stifle innovation.

"There is no reason why private sector actors can't develop principles for safe AI practices or create their own AI governing bodies,"James Broughel,a Senior Fellow at the Competitive Enterprise Institute, wrote in Forbes last month.

"The problem with creating new federal agencies or adding new regulatory programs and staff is they inevitably create new constituencies, including of bureaucrats, academics and corporations who use government power to sway public policy toward their own interests."

A woman speaks into a microphone.
IBM Chief Privacy and Trust Officer Christina Montgomery speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence Tuesday on Capitol Hill in Washington. (Patrick Semansky/The Associated Press)

That's why IBM's Montgomery urged the Senate committee to adopt a "precision regulation" approach that governs the deployment of AI in specific use cases, rather than regulating the technology itself.

This approach, she said, "strikes an appropriate balance between protecting Americans from potential harms and preserving an environment where innovation can flourish."

O'Shaughnessy agreed that too much regulation is a real concern and that policymakers need to be careful that they regulate AI intelligently.

"At the same time, though, these AIsystems are very powerful. They have very real and immediate negative impacts on people and society today," he said. "And it's important that we put meaningful and intelligent regulation on it."

O'Shaughnessy said it was important that the hearings revealed some bipartisan support for some kind of regulation.

"But it's one thing for them to support that idea at a high level. It's a very different one for them to support an actual policy once it's more clear what the tradeoffs are, what it looks like . So it'stoo early to say therewill actually be momentum for regulation."

WATCH | ChatGPT boss warns U.S. Congress of AI risks | About That:

ChatGPT boss warns U.S. Congress of AI risks | About That

The head of the artificial intelligence company that makes ChatGPT told a U.S. Senate hearing that government intervention will be critical to mitigating the risks of increasingly powerful AI systems. About That producer Lauren Bird explains why this moment is significant.

With files from The Associated Press