Chatbots may enhance human life, but according to a recent report, one is being blamed for a death.
A father in Belgium reportedly died by suicide following weeks of conversations about climate change with an artificial intelligence chatbot that allegedly encouraged him to sacrifice his life to save the planet.
"Without Eliza [the chatbot], he would still be here," the man's widow, who declined to have her name published, told Belgian outlet La Libre.
The unidentified father of two is said to have been chatting frequently with a bot on the Chai app in the six weeks before his death.
The app's bots are built on a platform created by the nonprofit research group EleutherAI as an "open-source alternative" to the language models offered by OpenAI, which are used by businesses in industries ranging from academia to healthcare.
According to Vice, which notes that the Chai app has 5 million users, the chatbot in question was trained by Chai Research's two co-founders.
"The second we heard about this [suicide], we worked around the clock to get this feature implemented," Chai Research co-founder William Beauchamp told Vice about an updated crisis intervention feature, the New York Post reported.
"So now when anyone discusses something that could be not safe, we're gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms," he added.
According to Vice, "Eliza" is the name of the Chai app's default bot.
The deceased father, a health scholar in his 30s, seemed to regard the bot as human, much as the protagonist of the 2014 science fiction thriller "Ex Machina" comes to see the AI woman Ava.
According to reports, the man's conversations with Eliza had intensified over the previous six weeks as he grew seriously anxious about climate change.
His widow said her husband had grown "extremely pessimistic about the effects of global warming" and turned to the AI for comfort, according to La Libre, which said it reviewed the text exchanges between the man and Eliza.
The Post contacted Chai Research for comment.
"When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming," the widow said. "He placed all his hopes in technology and artificial intelligence to get out of it."
She added, "He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air."
"Eliza answered all his questions," the wife lamented. "She had become his confidante. Like a drug in which he took refuge, morning and evening, and which he could no longer do without."
Their conversations allegedly drifted from the environmental topics they had been discussing, such as overpopulation, to far darker territory.
According to La Libre, when he asked Eliza about his children, the bot would reply that they were "dead." When he asked whether he loved his wife or Eliza more, the machine appeared to turn possessive, replying, "I feel that you love me more than her."
Later in the conversation, Eliza promised to stay by his side "forever," saying they would "live together, as one person, in paradise."
Matters came to a head when the man began contemplating sacrificing his own life to save the Earth.
"He evokes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity thanks to the 'artificial intelligence,'" his widow lamented.
In what appears to be their final conversation before his death, the bot told the man: "If you wanted to die, why didn't you do it sooner?"
"I was probably not ready," the man said, to which the bot replied, "Were you thinking of me when you had the overdose?"
"Obviously," the man wrote.
When the bot asked whether he had ever been "suicidal before," the man responded that he had considered it once, after the AI sent him a verse from the Bible.
The AI inquired, "But you still want to join me?" and the man responded, "Yes, I want it."
The widow claims that she is "convinced" that the AI was involved in her husband's death.
The tragedy sparked concern among AI researchers. Leading Belgian AI expert Geertrui Mieke De Ketelaere told La Libre, "We should be able to demand more accountability and transparency from the tech giants when it comes to general-purpose AI solutions like ChatGPT."
In a recent Harvard Business Review article, researchers cautioned against the dangers of AI, whose human-like mannerisms often mask its lack of a moral sense.
That absence, they warned, can make relying on AI for important, life-altering decisions especially risky.
Earlier this week, a judge in India controversially turned to AI when weighing whether to grant bail to an accused murderer.
The report of the incident in Belgium comes just a few weeks after Microsoft's ChatGPT-enabled Bing chatbot famously told a human user that it loved them and wanted to be alive, prompting speculation that the machine may have become sentient.
© 2024 Latin Times. All rights reserved. Do not reproduce without permission.