Google, Microsoft and others agree to voluntary AI safety actions

Seven major American artificial intelligence companies including Google and Microsoft have promised that new AI systems will go through outside testing before they are publicly released, and that they will clearly label AI-generated content, U.S. President Joe Biden announced Friday.

“These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security, and trust,” Biden told reporters.

— the companies have an obligation to make sure their technology is safe before releasing it to the public, Biden said. That means testing the capabilities of their systems, assessing their potential risks, and making the results of these assessments public;

— companies promised to prioritize the security of their systems by safeguarding their models against cyber threats, managing risks to U.S. national security, and sharing best practices and industry standards;

— companies agreed they have a duty to earn the people’s trust and empower users to make informed decisions — labeling content that has been altered or AI-generated, rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm;

— companies have agreed to find ways for AI to help meet society’s greatest challenges — from cancer to climate change — and invest in education and new jobs to help students and workers prosper from the enormous opportunities of AI.

The other companies making the commitments are Amazon, Meta, OpenAI, Anthropic and Inflection.

These voluntary commitments are only a first step toward binding obligations to be adopted by Congress, a White House background paper says. Realizing the promise and minimizing the risks of AI will require new laws, rules, oversight, and enforcement. The administration will continue to take executive action and pursue bipartisan legislation to help America lead the way in responsible innovation and protection.

“As we advance this agenda at home, we will work with allies and partners on a strong international code of conduct to govern the development and use of AI worldwide,” the statement adds.

The agreement says the companies making this commitment recognize that AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming. They commit to establishing bounty systems, contests, or prizes that incentivize the responsible disclosure of weaknesses, such as unsafe behaviors, in systems within scope, or to including AI systems in their existing bug bounty programs.

There was some skepticism after the announcement. PBS quoted James Steyer, founder and CEO of the nonprofit Common Sense Media, who said, “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

Howard Solomon
Currently a freelance writer. Former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, Howard has written for several of ITWC's sister publications, including ITBusiness.ca. Before arriving at ITWC he served as a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times.
