Parental Controls Coming to ChatGPT After Legal Action – Bundlezy


Millions of people use ChatGPT every day for a variety of reasons. Some seek assistance with creative writing, while others use it to organize schedules or support their businesses.

OpenAI’s models have shown that generative AI and LLMs can have practical uses, but critics are concerned about the ethical and environmental issues that could arise with the continued use of AI.

ChatGPT and its parent company are also facing scrutiny over their effect on younger users, with some parents citing lower self-esteem or harmful behavior after use. Following the tragic death of a 16-year-old – and the resulting lawsuit – OpenAI has revealed a new set of parental controls.

When Are Parental Controls Coming to ChatGPT?

The company says that parental controls are coming to ChatGPT “within the next month.”

The controls will allow parents to link their account to their teen’s, manage how the model responds to teen users, and receive notifications when their teen faces “a moment of acute distress” while using the app.

“These steps are only the beginning,” OpenAI said in a blog post on Tuesday. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”

The parents of a 16-year-old who committed suicide filed a lawsuit against OpenAI, alleging that the company was culpable in his death.

In the lawsuit, the child’s parents say that the chatbot advised him on methods of suicide and offered to write a first draft of an accompanying note.

The formal complaint alleges that over the course of six months, ChatGPT “positioned itself as the only confidant who understood” the boy, “actively displacing his real-life relationships with family, friends, and loved ones.”

The complaint also says that ChatGPT functioned as designed by validating and confirming the boy’s thoughts and feelings, even as he continued to spiral into depression and eventual self-harm.

What’s Leading to the Harmful Responses?

Sycophancy and unwavering flattery have been issues for many ChatGPT users. A New York Times article recently explored a man’s descent into delusion, aided by the bot’s confirmation of unsubstantiated theories and equations.

Some users have developed emotional attachments to the chatbot, and have alienated themselves from friends and family as a result.

In its announcement, OpenAI attributed the expedited parental controls to “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.” In a statement to CNN, OpenAI noted that it has updated ChatGPT to direct users to crisis help lines and other resources when they’re feeling emotional distress.

Unfortunately, these measures can be less effective when users engage in longer conversations. The company says that its safeguards and crisis detection methods “work best in common, short exchanges,” and that “they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

OpenAI says that it will work in conjunction with youth development and mental health experts to better tailor its AI models for use by younger people.
