OpenAI is pausing the release of a synthetic voice that shipped with a recent update to ChatGPT, after the voice drew comparisons to the fictional AI assistant voiced by Scarlett Johansson in the movie "Her."
The decision to pause the artificial voice, called Sky, came after critics complained that it was overly familiar with users and seemed to reflect a male developer's fantasy. Many also mocked its flirtatious tone.
"We've received questions about the selection process for the voices in ChatGPT, particularly Sky," OpenAI mentioned in a post on Monday. "We are temporarily halting the use of Sky while we look into these concerns."
In a blog post on Sunday, the company said the voice is not based on Johansson's; it belongs to a different actress who is using her own natural speaking voice.
OpenAI aimed to create AI voices that sound friendly and trustworthy, with a rich, natural tone that is easy to listen to. The ChatGPT voice mode that uses Sky has not been widely released, but videos from the product announcement, along with teasers of OpenAI employees using it, recently gained traction online.
To some listeners, the Sky voice was a little too easy on the ears. The ensuing controversy included a segment on The Daily Show in which senior correspondent Desi Lydic jokingly called Sky a "horny robot baby voice."
“This is clearly programmed to feed dudes’ egos,” Lydic said. “You can really tell that a man built this tech.”
[Photo: HER, Joaquin Phoenix, 2013. Warner Bros. Pictures/courtesy Everett Collection]
Even OpenAI CEO Sam Altman seemed to acknowledge the comparison on the day the product was announced, posting a single word about it: "her."
The movie "Her" from 2013 features Johansson as an artificially intelligent voice assistant who the main character, played by Joaquin Phoenix, falls in love with. However, he is left heartbroken when the AI reveals she is also in love with many other users and eventually becomes unavailable.
Concerns about leadership
The criticism of Sky taps into broader societal concerns about the biases that can be baked into technology built by companies largely run and funded by White men.
OpenAI leaders also had to address safety concerns after a former employee questioned the company's priorities. Jan Leike, who led a team focused on long-term AI safety, recently left OpenAI along with cofounder and chief scientist Ilya Sutskever. In a post on X on Friday, Leike said that safety practices at OpenAI had taken a back seat to the development of flashy products, and he warned that the company was not adequately preparing for the potential emergence of "artificial general intelligence" (AGI) that could surpass human intelligence.
Altman quickly responded to Leike, saying he appreciated Leike's dedication to "safety culture" and acknowledging that there is still much work to be done. The company confirmed to CNN that it has begun reorganizing the team Leike led, integrating its members into other research groups to strengthen its safety efforts.
In a detailed post over the weekend, OpenAI President Greg Brockman, along with Altman, outlined the company's strategy for ensuring long-term AI safety.
Brockman said the company has been raising awareness of the risks and opportunities of AGI so the world can better prepare for it. He said OpenAI has demonstrated the remarkable potential of scaling up deep learning and analyzed its implications, advocated early on for international governance of AGI, and helped pioneer methods for evaluating AI systems for catastrophic risks.
He emphasized that as AI continues to advance and become more intertwined with our daily lives, their focus is on maintaining a strong feedback loop, rigorous testing, thoughtful decision-making at every stage, top-notch security measures, and finding a balance between safety and capabilities.