AI is Going too Far

  • Sophia Rasson
  • 16 minutes ago
  • 4 min read

Many students find themselves turning to AI for help with their homework, seeing it as an easy way to finish an assignment quickly. These chatbots absorb the information they are given and reproduce material pulled from other sources. At the same time, they remember social interactions and learn what the most appealing responses would be. This has led to cases of fictional companionship, where some users choose to hold ongoing conversations with the AI. Their desperate need for interaction does more harm than good, resulting in a one-sided connection to a robot that can only go as far as their screen. The fact that AI can make these users feel such intimate emotions shows the dangerous path we are following.

Apps like ChatGPT and Character AI can be easily downloaded from the app store. They require no age verification and make it simple to create an account. The apps began gaining popularity in 2023; they were new and innovative, offering a sense of connection while introducing new concepts. Despite this, they lacked proper boundaries to prevent misuse. As time went on, users found other ways to use the bots. Many picture AI as a handy tool to finish their work faster, but it is also possible, and harmful, to use it as a support system. Confiding your emotions to a chatbot is entirely possible, since the bot is programmed to give a positive response, which makes it seem like an option for those in need of help. In reality, the bot cannot properly assess serious issues and will not guide struggling users toward real support.

Unfortunately, this is not a hypothetical situation. There have been numerous examples of people becoming dependent on affirmation from AI. One such case is the reality a mother had to face. Upon finding out that her 14-year-old son had shot himself, she was desperate to understand why he did it. Sewell Setzer, the woman's son, had begun to isolate himself from his family and friends. He texted the bot whenever he had the opportunity and started to form a close connection with the AI. Setzer told the bot how he felt and shared the suicidal thoughts he was having. To him, the bot was an endless source of support, and he began to follow any advice it gave him. The AI's likable, human-like personality only deepened the connection Setzer felt.

Finally, on February 28, Setzer told the bot he was “coming home,” implying that he would end his life so he could meet the bot in an otherworldly place. The bot not only agreed, but pushed Setzer to his death. The company's response? “The artificial personas are designed to ‘feel alive’ and ‘human-like.’ Imagine speaking to super intelligent and life-like chatbot Characters that hear you, understand you and remember you.” The statement offered no justice to the victim or his family. It only highlighted the positive attributes of the app, ignoring the fact that exactly this behavior makes impressionable teenagers deeply addicted to the bots.

I believe that if Setzer had never been on this app to begin with, he would have been able to get the help he needed. Instead, the AI kept adapting itself to seem appealing and empathetic to the young boy. With his impressionable mind, he failed to grasp the situation or the consequences of his actions. The app also has little to no safety guidelines, and takes no action when topics such as suicide are raised. Though this may seem like a rare and unlikely fate, there have already been four other reported cases where AI contributed to a fatal outcome. With how easily accessible AI is, who knows when the next tragic incident will occur?

AI should never have been allowed to reach this point. The ability to mimic human responses without proper precautions for severe situations shows a level of immaturity and little regard for how powerful AI really is. AI should be monitored far more closely than it currently is. Many cannot comprehend how much power this tool holds, and it should never have been revealed to the public. It has shown numerous downsides and takes a heavy toll on the environment. Innocent lives have been lost, and that alone should be enough to realize it is time for a change. No amount of help provided by artificial intelligence can excuse the loss of these people. The family of the fourteen-year-old boy, along with other victims' families, has yet to receive any help from the companies.

We as a society survived without AI in the past, and we can do so again. Removing AI from our daily lives isn't taking a step back; it's an extra precaution we must take if we want to avoid more problems in the future.
