OpenAI is rolling out long-awaited ‘advanced voice’ feature
Artificial intelligence firm OpenAI has begun rolling out its long-awaited “Advanced Voice” feature for select ChatGPT users.
“Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week,” OpenAI said in a Sept. 24 post to X.
“It can also say ‘Sorry I’m late’ in over 50 languages,” the company added, a nod to the feature’s delayed release, which was originally planned for earlier in the year.
Advanced Voice Mode is an upgrade to ChatGPT’s latest GPT-4o model. It allows for faster, more intuitive communication with the model and includes several more humanlike conversational improvements.
As part of the new feature, OpenAI unveiled five new voices: Arbor, Maple, Sol, Spruce, and Vale. These join the existing Breeze, Juniper, Cove, and Ember voice options.
Users on the ChatGPT Plus and Team tiers will gain staggered access to the new voices, which are designed to make conversations more humanlike, including letting users interrupt the chatbot and switch topics mid-conversation.
The rollout also brings Custom Instructions and “memories” to voice conversations. Users can set custom instructions that tailor the chatbot to their preferences, and the chatbot can learn from and “remember” important details from previous audio conversations.
It doesn’t always work as intended, though. In an FAQ, OpenAI admitted that the conversation experience isn’t yet optimized for in-car Bluetooth or speakerphone use, and that ChatGPT can be interrupted by background noise.
Still, some users on X complained that the new voice options pale in comparison to the controversial “Sky” voice, which OpenAI pulled after a heated legal dispute with actress Scarlett Johansson.
Johansson said OpenAI CEO Sam Altman approached her in 2023 to be the voice of ChatGPT, but she declined for “personal reasons.” Following the release of Sky as a voice for GPT-4o, she said she was “shocked, angered, and in disbelief” over the chatbot’s eerily similar vocal features.
Altman later scrapped the voice and has maintained that any resemblance was purely coincidental, despite having made a single-word post to X that read “Her,” a direct reference to the 2013 Spike Jonze film in which Johansson voiced an intelligent AI operating system.