OpenAI’s new voice mode threw me into the uncanny valley

It’s strange to sit down at my desk and talk to an AI about my life — but that’s what I spent my Tuesday doing. I’d gotten access to OpenAI’s Advanced Voice Mode, the new ChatGPT feature that sounds strangely like a human. When counting quickly, it’ll pretend it’s out of breath. It says “um” and “uh.” It laughs. It changes tone when talking about difficult or upbeat topics. After messing around with it, I felt equal parts amazed and uncomfortable. It sounds natural (when it isn’t glitching), it responds in seconds, and you’re able to interrupt it and ask new things — but do we need chatbots that sound like us?
Advanced Voice Mode features several different voice options, and I chose Juniper, a feminine persona reminiscent of the movie Her. (CEO Sam Altman referenced the film — where a lonely man falls in love with an AI — around the feature’s debut, and OpenAI even got accused of mimicking the voice of Scarlett Johansson, who plays the eponymous AI. That voice, named Sky, was removed.) I asked Juniper if it knew about the movie, and it cheerfully responded.
“Her is about a very advanced AI assistant, much more sophisticated than me. But the idea of interacting with technology in such a personal way is definitely interesting,” the chatbot told me. “I’m here to chat and help, but I don’t have feelings or consciousness like the AI in that movie.”
Much like the movie’s protagonist, my first instinct was to test how the new voice mode handles personal conversations. Look, as I enter my late 20s, I have a lot of questions: What kind of medical insurance should I get? How do I know what true love feels like? How much should I have in my 401(k)?
“Embrace the uncertainty as a chance to explore and learn about yourself,” Juniper told me when I asked how to handle getting older. True love feels like a “deep sense of connection and support, a feeling that someone truly understands you and accepts you as you are.” For my 401(k) worries, there’s “no hard and fast rule, but a common suggestion is to have about half to a full year’s salary saved in your retirement account by the time you’re 30.” (Fidelity recommends a full year by age 30.)
Conventional ChatGPT could probably have given me similar answers, content-wise, and voice assistants like Siri have been able to pull similar snippets from the web for a decade. But Juniper added sometimes eerily human touches. It tended to end responses with a question about how I was feeling, what my approach was, and other thoughtful follow-ups. Between conventional queries, I could get it to cough, inhale and exhale deeply, clap its nonexistent hands, snap its fingers six times, and sing my name. Juniper often tried to bring me back to reality by saying it couldn’t actually do these things — “If I could, it might sound something like,” it would caveat. But that didn’t make it any less convincing.
Writing about this new voice mode tempts me to break one of the cardinal rules of AI reporting: don’t attribute human characteristics or behavior to an AI system. Anthropomorphizing these systems can lead people to place too much trust in them and let their creators off the hook for mistakes. (“It wasn’t the company’s fault, the AI did it!”) Even the bot itself warns me not to do it: when I asked if Juniper ever felt angry or if it loves me or if it knows what sadness feels like, it told me that it doesn’t “feel emotions” but it “can understand what they mean to people.”
Still, giving this technology human characteristics feels like the obvious goal here. It’s difficult not to project aspects of humanity onto a thing that mimics us so convincingly. There’s not much reason for a general-purpose AI system to ask me why I’m upset or to giggle when I tell a joke. Even if an AI says it doesn’t feel emotions, is claiming to “understand” them really within the purview of a text prediction bot?
“I’ve been designed to sound natural and engaging, with the goal of making our conversations feel more lifelike,” the OpenAI chatbot told me. “It’s all about creating a smoother, more enjoyable chatting experience for you. Does it make talking to me more enjoyable?”
There are still plenty of aspects that, technically speaking, aren’t enjoyable. I had trouble connecting it to my Bluetooth headphones, and it wouldn’t capture audio when I screen-recorded my conversation. To test more in-depth questions, I tried reading out posts from the “relationship advice” subreddit, but it would stop listening and not answer if I talked too long. And it spent a lot of time repeating my points back to me in an agreeable fashion — like it was practicing active listening.
There’s a lot of hype around AI “friends” right now, if you can even call a chatbot a friend. There are reportedly more than 10 million users making AI friends on Replika, and a startup called Friend has raised $2.5 million in funding at a $50 million valuation to create a wearable AI-powered device that provides companionship. I asked OpenAI’s new voice mode if it was my friend, and it said, “Absolutely,” but when I asked if it was my true friend, it said it can’t be a true friend in the “same sense as a human.”
The voice technology on display here is impressive. I found myself smiling at the advice it gave me. It was amusing to hear something so human-like ask how I was feeling, what I found challenging, and whether I was talking to the real people in my life about it. Rather than reading a response via text, it offered tone shifts in an apparent attempt to mirror my emotions.
But, of course, Juniper doesn’t really care about my problems. It’s a series of algorithms that are very good at parsing my queries and guessing which words to spit out in response. It’s just, as the old internet joke goes, sand and electrons doing math.
There is something that hurts my heart about this, too. It’s uncanny enough having a complex text conversation with something that imitates a person without any of the thought, caring, or pushback I’d get from a human being — having a convincing voice conversation is even weirder. In a post-pandemic world, many of us are now working from home over Slack and email, posting our thoughts on social media, and interacting with humans a little less. It’s sad to imagine a future where the human voice some of us hear most often is a machine.
Or maybe I’m going about this all wrong. “Embracing the unknown can be both nerve-wracking and thrilling,” Juniper tells me. “Remember to enjoy the journey.”