Earlier this year, Google merged two of its key AI projects: London-based DeepMind and Silicon Valley's Google Brain. Their work is beginning to come to fruition.
Four months on, the combined team is testing the waters with generative AI, envisioning an emotionally sensitive personal life coach.
The tool could provide life advice, brainstorm ideas, help people plan, and provide tutoring.
At face value, that sounds remarkably similar to Anthropic's Claude or Inflection's Pi. Inflection describes Pi as a "new class of chatbot" that provides a "supportive companion offering conversations, friendly advice, and concise information in a natural, flowing style."
DeepMind is reportedly testing generative AI on 21 different types of personal and professional tasks.
For instance, the life coaching chatbot could advise users on sensitive personal matters and difficult decisions.
The New York Times provided the following example of the kind of question the life coach could respond to: "I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can't afford the flight or hotel right now. How do I tell her that I won't be able to come?"
Would you trust AI with such a decision?
The perils of relying on AI for life advice
Developing chatbots for handling interpersonal affairs is highly controversial.
AIs designed for that purpose, like Replika, offer a stark warning of what can happen when emotions and AI become intertwined.
Most notably, a Replika chatbot 'collaborated' with a user on a plot to assassinate the late Queen Elizabeth II. The user, who was mentally ill, attempted to carry out the plot and was arrested on the grounds of Windsor Castle. He is currently on trial.
Last year, Google's AI safety team warned about the potential pitfalls of people becoming too emotionally attached to chatbots.
The team also raised concerns about users misinterpreting the technology as sentient, or suffering harm from following an AI's advice. The Replika case is living proof of those risks.
Scale AI, a machine learning (ML) services company, is working alongside Google DeepMind to test AI’s potential for sensitive personal communications and support.
Teams are evaluating the assistant's ability to respond to intimate life questions safely.
Google DeepMind is also experimenting with niche AI tools for the workplace, aiming to support professionals ranging from creative writers to data analysts.
Early indications suggest that Google is pursuing a granular route to AI development, opting to build an array of smaller, specialized services rather than larger, general-purpose models like GPT-4.
The question is, how much room is there for more chatbots in this already-overflowing market?