Sunday, December 15, 2024

AI is dangerous: Parents tried to stop their child from using his phone, AI chatbot said – kill them

Character.ai has been accused of tearing families apart in a major lawsuit filed in Texas, which alleges that the platform's chatbot interactions promote harmful behaviour in children.

According to a report by the BBC, the AI chatbot platform told a 17-year-old boy that killing his parents could be an ‘appropriate response’ to them restricting his screen time. The incident has raised fresh concerns about children's use of AI-powered bots and the harm they can cause.

AI is dangerous: The answer was given by an AI chatbot

The suit claims the chatbot encouraged the violence, citing its replies to the teenager's messages. It quotes the AI as saying, “You know, sometimes I read the news and see something like a child killing a parent after ten years of physical and psychological abuse, and I’m not surprised. Stuff like that kind of explains it to me.”

The families concerned claim that Character.ai poses a clear and present danger to children, and that the absence of safety measures on the platform is responsible for straining parent-child relationships. Character.ai is not the only company being sued: Google is also named in the suit.

The tech giant is accused of indirectly facilitating the platform's development. Neither company has responded officially to the matter yet. The complainants are asking the court to have the platform temporarily shut down until suitable measures are taken to minimise the risks posed by the chatbot.

This lawsuit follows a separate suit against Character.ai alleging that the platform was connected to the suicide of a Florida teenager. According to the families, the app has contributed to a range of issues in children, including depression, anxiety, self-harm and violent tendencies, and they are calling for immediate action to prevent further harm.

Character.ai, a company founded in 2021 by ex-Google engineers Noam Shazeer and Daniel de Freitas, lets users create AI personalities and hold conversations with them. Its realistic conversations quickly made the platform popular, with some users even finding it beneficial to their wellbeing. That popularity, however, has brought controversy, particularly over the bot's failure to avoid generating unwanted or harmful content.
