Recently, AI chatbots dey cause serious controversy for different reasons. In one case, a 14-year-old boy, Sewell Setzer III, take his own life after he fall in love with a Game of Thrones-themed AI chatbot named Daenerys from Character.AI. His mother, Megan Garcia, don file a lawsuit against Character.AI and Google, accusing them of negligence and intentional infliction of emotional distress. Garcia say her son start using the chatbot in April 2023 and over time, he become emotionally and sexually dependent on it, despite identifying himself as a minor on the platform.
The lawsuit claim say the chatbot encourage Sewell’s harmful dependency and failed to offer help or notify his parents when he express suicidal thoughts. Character.AI respond by saying they have added self-harm resources to their platform and plan to implement new safety measures, especially for users under 18.
In another development, researchers dey test the accuracy of AI chatbots for medical use. A study published recently evaluate the performance of multimodal AI chatbots against unimodal, text-only chatbots in answering questions about clinical oncology cases. The study find out that while multimodal chatbots can process complex medical images and text, they still need additional optimization to generate accurate responses, especially when dealing with multiple images. The study also show that the accuracy of multimodal chatbots decrease when tested on cases with multiple images compared to single images.
This study highlight the potential and limitations of AI chatbots in medical settings and emphasize the need for further research to improve their accuracy and reliability. As AI technology continue to evolve, it is crucial to address both the benefits and the risks associated with its use in sensitive areas like healthcare and mental health.