Apple joins Google, Microsoft and more in committing to new AI safety guidelines: What it means for you

Tim Cook superimposed over the new logo for Siri introduced at WWDC 2024
(Image credit: Apple)

This weekend, Apple joined Google, Microsoft, Meta, OpenAI, and other leading AI companies in voluntarily committing to new safeguards meant to ensure AI tech is safe, secure, and private for users. 

The new safeguards stem from an executive order the Biden administration issued in October 2023 titled "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order directed federal agencies to carry out a sweeping set of actions addressing safety and privacy concerns surrounding AI, with deadlines stretching out to 270 days after signing. 

The news comes as Apple is gearing up to launch its new Apple Intelligence AI platform this fall. 

Apple agreeing to these AI safety commitments is good news for users, too. Here's why. 

Apple joins AI leaders in agreeing to new AI safeguards

Apple Store

(Image credit: Apple)

Apple is now the 16th major tech company to voluntarily agree to a new set of AI safeguards designed to protect users' safety and privacy. 

The Biden administration announced Apple's move in a press release on July 26. According to the press release, federal agencies have also completed, on schedule, all of the actions the October 2023 executive order required within 270 days. 

The press release notes, "Today, the administration announced that Apple has signed onto the voluntary [AI safety] commitments, further cementing these commitments as cornerstones of responsible AI innovation."

Other companies that have already agreed to the regulations include Microsoft, Google, Amazon, and OpenAI. 

This set of voluntary AI safeguards is one of the first frameworks in the U.S. to establish standards for the AI industry. It comes amidst growing concerns about data privacy and the safety of new AI tools, which have boomed over the past few years. The European Union has adopted similar legislation, the AI Act, to address AI's safety and privacy risks. 

How these new AI safeguards protect users

Google logo in style of TikTok logo with pizza slice

(Image credit: Laptop Mag / Rael Hornby)

Apple joining Microsoft, Google, OpenAI, and others in signing onto AI safety regulations has a direct impact on users. It signals that these companies are willing to be held accountable for the safety of their AI products.  

That's important to note, since serious concerns about security and privacy in AI have been swirling for months now, spanning everything from copyright to data privacy. 

AI safety regulations will help curb these risks and concerns by holding AI developers accountable. 

For example, the new AI safeguards in the U.S. include guidelines for developing AI tools for education that are safe for students. The safeguards also include guidelines from the AI Safety Institute to help developers protect their AI products from misuse. There are even guidelines addressing AI's impact on patent and copyright law.

This means that AI tools from companies complying with the new safeguards will be held to minimum standards of protection against risks like data theft and the use of AI to generate inappropriate or harmful content. 

Improved safety measures can even help prevent AI tools from giving users unsafe or inaccurate advice, like the infamous "glue pizza" answer from Google's AI Overviews in Search earlier this year. 

"Hallucinations" like this can be humorous, but they can also mislead unsuspecting readers, such as children, who may not realize they're reading inaccurate information. So, safeguards to minimize those hallucinations not only improve safety but also result in better performance for all users. 

Why AI safety regulations are crucial

These safety guidelines may be arriving at just the right time. Amidst the AI boom over the past few years, industry leaders are facing growing concerns about safety and privacy.

OpenAI has been embroiled in controversy after several leading members of its safety and ethics team quit earlier this year. Several former OpenAI employees released an open letter in June calling for greater safety and transparency from AI companies.

Similarly, one former OpenAI employee, Jan Leike, spoke out about his experience and concerns on X following his resignation back in May, commenting, "Over the past years, safety culture and processes have taken a backseat to shiny products."

These calls for a greater emphasis on safety echo privacy concerns surrounding generative AI, as well. For instance, ever since OpenAI's DALL-E 2 image generator launched in 2022, there have been legal and privacy concerns about potential copyright infringement.

Meanwhile, individual users may be risking their personal data privacy when they use AI tools, some of which collect user data for training purposes. Users have also had to learn how to spot AI-generated content in the news, which risks accelerating the spread of misleading information.

By signing onto the new AI safeguards, Apple is signaling that its Apple Intelligence platform will respect users' privacy. This could be part of why Apple Intelligence's launch is delayed, but that may be a small price to pay for a safer, more secure set of AI features.

Stevie Bonifield
Freelance Writer

Stevie Bonifield is a freelance tech journalist specializing in keyboards, peripherals, gaming gear, and mobile tech. Outside of writing, Stevie loves indie games, photography, and building way too many custom keyboards.