A timeline of OpenAI controversy: November 2023 to May 2024

Will controversy around Sam Altman be the downfall of OpenAI?

(Image credit: OpenAI)

On May 13, the world met GPT-4o, the latest version of OpenAI's large language model. Updates to the algorithm behind ChatGPT usually draw attention on social media, but this one was different. 

After OpenAI released a demo of the new "Sky" voice featured on GPT-4o, actress Scarlett Johansson accused OpenAI of using her voice after she explicitly said not to.

It's just the latest in a string of crises for OpenAI over the past several months. From CEO Sam Altman's abrupt firing and near-immediate reinstatement to the exit of top safety experts, controversy has rained down on OpenAI amid an explosion in the AI industry it helped jump-start. Altman has become the eye of the storm.

Recent issues at OpenAI, particularly the dispute with Johansson, have observers now openly asking if Altman is becoming a tech villain.

Is Altman taking too many risks in his pursuit of artificial general intelligence? The answer might lie in Altman's recent past, particularly how he went from surviving a boardroom coup to trading blows with a Hollywood A-lister.

The ominous firing and return of Sam Altman

Sam Altman of OpenAI

(Image credit: Justin Sullivan/Getty Images)

Altman co-founded OpenAI in 2015 along with Elon Musk and several other technology leaders. OpenAI became a household name in late 2022 with the introduction of ChatGPT. 

Altman quickly rose to prominence, garnering praise and awards from the tech industry. For instance, TechRadar enthusiastically highlighted "6 exciting ways to use ChatGPT – from coding to poetry." People all over the world quickly found dozens, even hundreds of exciting uses for OpenAI's groundbreaking algorithm.

In the year that followed, the rest of the tech world scrambled to catch up with OpenAI. Then, in a move that shocked the business and tech worlds with its swiftness, the board of OpenAI fired Altman as CEO in November 2023. What followed was a chain of events that could have been ripped right out of Succession:

  • November 17, 2023: OpenAI's board of directors fires Altman as CEO. Shortly after, OpenAI's employees submit a letter to the board threatening to quit en masse if Altman is not restored as CEO.
  • November 20, 2023: Microsoft CEO Satya Nadella announces Altman will be joining AI leadership at Microsoft.
  • November 21, 2023: Virtually the entire board of OpenAI resigns, and a new board is formed, with Altman returning as CEO.
  • May 13, 2024: GPT-4o launches, featuring the "Sky" voice in a demo.
  • May 14, 2024: OpenAI chief scientist Ilya Sutskever announces his exit.
  • May 16, 2024: OpenAI Head of Alignment Jan Leike resigns, citing safety concerns.
  • May 20, 2024: Johansson releases a statement to The New York Times revealing she refused to allow OpenAI to use her voice for "Sky".
  • May 20, 2024: Sam Altman releases a statement insisting that "Sky" is not the voice of Johansson and that the voice "was never intended to resemble hers".
  • May 28, 2024: Former OpenAI board member Helen Toner reveals in an interview that Altman was "withholding information" from board members.

At first glance, Altman's ousting and rapid return as OpenAI CEO might not seem directly connected to his more recent controversy with Johansson. However, the reasons behind the board's move to fire Altman were a prelude to this month's issues with GPT-4o. 

This week, former OpenAI board member Helen Toner shed light on the board's decision to fire Altman back in November. In an interview on The TED AI Show, Toner claimed Altman withheld information from the board, gave them inaccurate information, and even outright lied. Toner specifically cited a lack of transparency around safety. 

Toner explained, "On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning it was basically impossible for the board to know how well those safety processes were working."

On multiple occasions, [Altman] gave us inaccurate information about the small number of formal safety processes

Helen Toner, former OpenAI board member

This is important to note because the board that fired Altman, only to end up resigning itself, governed OpenAI's non-profit parent. That means its job was to ensure that OpenAI stayed focused on public good rather than profit.

Firing Altman wasn't just about power or business strategy. For this board, it was about concerns over transparency and the potential harm OpenAI's products could do. So, the departure of that original board marked a clear turning point for the company, with Altman at the heart of it.

However, at the time, Altman was widely seen as the hero of the story thanks to overwhelming support from OpenAI's employees, nearly all of whom threatened to quit if the board refused to reinstate him as CEO. That heroic trajectory didn't last long.

Sam Altman's clash with Scarlett Johansson, explained

Sam Altman's latest controversy involves actress Scarlett Johansson

(Image credit: Getty Images/OpenAI)

OpenAI unveiled its latest advancement, GPT-4o, on May 13, 2024. It's the company's next step toward its ultimate goal of creating artificial general intelligence, or AGI, a type of AI that can match or even surpass human intelligence. Within days of launching GPT-4o, Sam Altman was at the center of what is arguably OpenAI's most scandalous controversy yet: the company faced accusations that it copied Johansson's voice against her wishes.

According to Johansson, Altman contacted her in September 2023, asking if she would be the voice of ChatGPT 4.0. Johansson declined the offer. Nine months later, Altman contacted her again just two days before the release of GPT-4o, asking if she would reconsider her decision. She refused again and reported being shocked when she heard the voice of "Sky" in the GPT-4o demo. It sounded uncannily similar to her own voice. 

"When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," she said in a statement sent to The New York Times.

Altman disputed Johansson's claims that OpenAI used her voice for Sky. According to his version of the story, OpenAI held a lengthy audition process and selected a different voice actress after Johansson declined the role. That voice actress remained anonymous at the time of writing, making it difficult to confirm OpenAI's story.

Altman only made the situation worse when he posted "her" on X just before GPT-4o's release. Her is the title of a 2013 Johansson film where she voices an AI assistant who becomes the protagonist's love interest. Other X users were quick to make the connection, replying to Altman's post with images and gifs from the film. 

This reference makes Altman's interactions with Johansson all the more suspicious. Considering his nod to Her and his repeated requests for Johansson's voice, it is clear Altman had his heart set on having the actress voice ChatGPT. Even if OpenAI didn't copy her voice without permission, it seems like an uncanny coincidence that the alternative voice actress it selected sounds so similar to Johansson.

To make matters worse, two prominent figures at OpenAI and long-time colleagues of Altman left the company in the wake of GPT-4o's release. 

The quiet downfall of OpenAI's "Superalignment" team

OpenAI chief scientist Ilya Sutskever and Head of Alignment Jan Leike left the company just two days apart the week of GPT-4o's launch. Their exits are particularly notable because both may be connected to safety concerns.

Sutskever was directly involved in the move to oust Altman as CEO, and he and Leike were both leaders of OpenAI's "Superalignment" team, which was responsible for ensuring that the AI models the company develops are designed with humanity's best interests at heart.

It was effectively the team in charge of identifying applications of AI that could carry ethical and safety risks, like the use of deepfakes for misinformation and identity theft.

Leike was clear about his concerns on social media. He explained his exit in a thread on X: "Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity." 

Leike also mentioned that his team had recently been "sailing against the wind" and directly called out a dangerous cultural shift at OpenAI, claiming that "over the past years, safety culture and processes have taken a backseat to shiny products."

Against the backdrop of Toner's and Johansson's accusations, a darker picture is forming for OpenAI and Altman, or at least one less bright than in November 2022, when ChatGPT debuted. The company aims to develop AGI, but Leike's claims suggest that Altman may be pursuing that goal at the expense of unresolved ethical concerns surrounding AI.

The Superalignment team at OpenAI appears to have been sounding the alarm on a worrying trajectory for the company. It's all the more concerning in light of Johansson's claims that she repeatedly rejected Altman's requests to use her voice.

If Altman is willing to push legal boundaries with a famed Hollywood actress, what other lines would he cross? What lines has he perhaps already crossed that motivated the head of OpenAI's safety and ethics team to leave?

Is this the person we want at the helm when AGI is achieved?

OpenAI logo on green background with slight video degradation

(Image credit: OpenAI)

It's hard to say exactly what may be happening behind the scenes at OpenAI, but giving Altman the benefit of the doubt is becoming increasingly difficult. Even if Leike's implications about the company's position (or lack thereof) on safety were not in reference to Altman, the CEO may be attracting more negative publicity than OpenAI can handle. 

Over the past few years, consumers have grown increasingly skeptical about AI, particularly creatives, who are at the highest risk of being "replaced" by generative AI. No company seems to be immune to this tension, either. 

For instance, Apple faced harsh backlash earlier this month after releasing an ad for its new iPad Pro that included a collection of instruments, tools, and artwork being physically crushed. 

Meanwhile, Google launched its new AI Overviews feature in Search, which was immediately caught giving dangerous and nonsensical results that were rapidly shared all over social media.

This was arguably the worst possible time for Altman to put OpenAI in Johansson's crosshairs. While OpenAI did reluctantly pull the Sky voice, Altman's interactions with Johansson so far don't make him look particularly professional and even call his ethics into question. Regardless of his intentions, the situation makes OpenAI look like it tried to exploit a widely admired artist and then played dumb when it was caught in the act.

For OpenAI to survive the current climate of skepticism surrounding AI, it needs someone genuine and trustworthy at the helm. Altman's visionary, idealistic nature might have fit that description once, but that may not be the case much longer. 

The question is, is it too late for OpenAI to heed Toner and Leike's warnings about safety as it continues to barrel toward artificial general intelligence? Even more concerning, will Sam Altman release the first AGI into the world before addressing growing fears around the safety and ethics of these algorithms? 

Stevie Bonifield
Freelance Writer

Stevie Bonifield is a freelance tech journalist specializing in keyboards, peripherals, gaming gear, and mobile tech. Outside of writing, Stevie loves indie games, photography, and building way too many custom keyboards.