BusinessPostCorner.com

Altman and Pichai now want governments to regulate A.I.

May 23, 2023
in Business
Reading Time: 4 mins read

Artificial intelligence is advancing faster than anyone was prepared for—and it’s starting to scare people. Now, the chiefs of two tech companies that are front-runners in A.I. are sharing the same message—that governments should regulate A.I. so it doesn’t get out of hand. 

On Monday, top leaders of OpenAI, the maker of buzzy A.I. chatbot ChatGPT, said governments should work together to manage the risk of “superintelligence,” or advanced development of A.I. systems. 

“We can have a dramatically more prosperous future; but we have to manage risk to get there,” OpenAI CEO Sam Altman wrote in a blog post.

He wasn’t the only one calling for more oversight of A.I. Alphabet and Google CEO Sundar Pichai also proposed that governments be more involved in regulating the technology.

“AI needs to be regulated in a way that balances innovation and potential harms,” Pichai wrote in the Financial Times on Monday. “I still believe AI is too important not to regulate, and too important not to regulate well.”

Pichai suggested that governments, experts, academics, and the public all be part of the discussion when developing policies that ensure the safety of A.I. tools. The Google chief also said countries should work together to create robust rules. 

“Increased international co-operation will be key,” Pichai wrote, adding that the U.S. and Europe must work together on future regulation in the A.I. space.

In his blog post, Altman echoed the idea of better coordination on the safe development of A.I., rather than multiple groups and countries working separately. He said one way to do this was by getting “major governments around the world” to set up a project that current A.I. efforts could become part of.

His other suggestion was a high-level governance body, akin to the United Nations’ International Atomic Energy Agency (IAEA), which oversees the use of nuclear power.

“We are likely to eventually need something like an IAEA for superintelligence efforts,” Altman wrote, adding that significant projects should be subject to an “international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.” 

Another common thread from the two CEOs was the belief about the revolutionary impact of A.I. on human society. Altman said that superintelligence “will be more powerful than other technologies humanity has had to contend with in the past,” while Pichai repeated his famous proclamation—that A.I. is the “most profound technology humanity is working on.”

OpenAI and Google did not immediately return Fortune’s request for comment.

As these tech CEOs call for greater government involvement in regulation, others argue that government regulation of A.I. will hamper innovation and that companies should regulate themselves.

“My concern with any kind of premature regulation, especially from the government, is it’s always written in a restrictive way,” former Google CEO Eric Schmidt told NBC News this month. “What I’d much rather do is have an agreement among the key players that we will not have a race to the bottom.” 

Growing calls for regulations

A.I.’s potential risks are getting a lot of attention as the technology improves. In a Stanford University study published earlier this year, 36% of the experts surveyed acknowledged that A.I. would be ground-breaking—but said its decisions could lead to a “nuclear-level catastrophe.” Such tools could also be misused for “nefarious aims” and are often biased, directors at the university’s Institute for Human-Centered A.I. noted. The report also highlighted concerns that top companies could wind up with the most control over A.I.’s future.

“AI is increasingly defined by the actions of a small set of private sector actors, rather than a broader range of societal actors,” the center’s directors wrote. 

Those fears have been voiced at the government level, too. Federal Trade Commission Chair Lina Khan, one of the key voices in the discussion about anti-competitive practices, warned that A.I. could benefit only powerful actors if insufficiently regulated.

“A handful of powerful businesses control the necessary raw materials that startups and other companies rely on to develop and deploy A.I. tools,” Khan wrote in the New York Times this month. This means that control of A.I. could be concentrated with a few players, leading to the tech being trained on “huge troves of data in ways that are largely unchecked.” 

Other A.I. experts, including Geoffrey Hinton, the so-called “Godfather of A.I.” for his pioneering work in the field, have pointed out the technology’s risks. Hinton, who quit his job at Google earlier this month, said he regretted his life’s work because of A.I.’s potential dangers if put in the hands of bad actors. He also said that companies should only develop new A.I. if they are prepared for what it can do.

“I don’t think they should scale this up more until they have understood whether they can control it,” Hinton said, referring to tech companies leading in the A.I. arms race.

Lawmakers have begun discussing regulating A.I. Last month, the Biden administration said it would seek public comments about possible rules. And Senate Majority Leader Chuck Schumer, D-N.Y., is working with A.I. experts to create new rules and has already released a general framework, the Associated Press reported.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” the National Telecommunications and Information Administration’s Alan Davidson said in April, according to Reuters.



© 2023 businesspostcorner.com - All Rights Reserved!