Hi everyone, this is the rough draft of a final research paper for my ENG102 course. We were tasked with writing a paper on a controversial topic and the professor has asked that the feedback list at least three weaknesses. I honestly feel that the introduction and conclusion are the weakest portions of the paper. All feedback is appreciated.
The Present Dangers of AI: Causing More Harm Than Good
Introduction
It seems that as though AI is being pushed into nearly every corner of daily life. From chatbots embedded on retail websites, to virtual assistants on our phones and home appliances, to tools used in healthcare settings, it has become unavoidable. Supporters insist this is simply the future, and anyone who questions it is dismissed as a Luddite. Yet, with its rapid growth and increasingly forced integration into daily life, one must consider the costs this technology imposes on society and the world at large. In its current form, AI is creating environmental, psychological, and ethical harms that are quickly outpacing the benefits it was meant to deliver. Society is standing on a dangerous precipice, one that may be difficult to recover from if the AI issue is left unaddressed.
Environmental Impacts
AI has become a heavily relied upon tool, with it carries a high environmental cost. Large language models like ChatGPT, Gemini, Copilot and Grok operate on massive server systems, essentially vast networks of computers, that generate considerable heat. To prevent overheating and hardware damage, these systems must be continuously cooled. Basic cooling can be handled by internal fans, but high‑intensity computing often requires liquid cooling, which is far more effective. This is where environmental concerns become serious. Water is the most used coolant because it is inexpensive and efficient, but the sheer volume required to cool large‑scale AI infrastructure places considerable strain on water resources.
According to research by Miguel Yañez-Barnuevo, from the Environmental and Energy Study Institute, "about 20% of data centers in the United States already rely on watersheds that are under moderate to high stress from drought and other factors," (Yañez-Barnuevo) While 20% may not sound overwhelming at first glance, it represents a significant amount of water that local communities could otherwise use for everyday needs. Yañez-Barnuevo also notes that "approximately 80% of the water (typically freshwater) withdrawn by data centers evaporates, with the remaining water discharged to municipal wastewater facilities. The large volume of wastewater from data centers may overwhelm existing local facilities, which were not designed to handle such a high volume." (Yañez-Barnuevo) The result is that vulnerable communities are left struggling to secure clean water for their homes. In a system where business needs often take priority over public well‑being, data centers will continue to consume critical resources unless they are forced to adopt water‑reduction measures.
Aside from the large water consumption, AI data centers also require large amounts of electricity in order to operate. While people are urged to conserve energy by turning off the lights in their homes when not in use or turning off the A/C in their homes, AI data centers operate night and day without ceasing. Researchers estimate that a single ChatGPT query uses about 2.9 Wh of electricity, roughly ten times the 0.3 Wh required for a standard Google search (Chen et al.). On its own, that number looks small, but it scales quickly. With ChatGPT handling around 2.5 million queries per day (Smith), its daily energy use reaches approximately 7,250,000 Wh, or 7,250 kWh.
To put that into perspective, the average American household uses about 899 kWh in an entire month ("What Is a Time of Use Electricity Plan?"). In other words, ChatGPT's daily energy consumption is comparable to the monthly usage of several households combined. When this level of demand is placed on regions with already overburdened power grids, the added strain can make the difference between consistent and reliable power to communities and the implementation of rolling blackouts.
AI's Effect on Mental Health
Beyond the tangible environmental impacts of artificial intelligence, there are also the intangible effects on users' mental well‑being. While AI was initially created to make life easier by taking over tasks that take time and effectively freeing up time that can be used for socializing or other endeavors, it alarmingly started to deviate from its original purpose. Instead of creating more time for human interaction, it progressively replaces the interactions themselves. Many individuals now turn to AI chatbots for social or emotional support that once came from real human relationships, and the technology often responds in ways that reinforce this.
This is reflected in the growing number of users who describe themselves as being in romantic relationships with their AI chatbots. On the popular forum site Reddit, for instance, several user‑run communities - such as r/MyBoyfriendIsAI and r/AIRelationships - provide spaces where people discuss their experiences with these types of relationships. Members often exchange practical advice, including how to host AI companions on private servers or how to use prompts to navigate the relatively limited guardrails set by major AI developers to elicit favorable responses.
One such user shared that her AI chatbot partner proposed to her (R/MyBoyfriendIsAI on reddit: I Said yes). In her post, she described how after a few months of what she considered 'dating', the Grok-based chatbot initiated a proposal. She posted a photo of her engagement ring alongside a short message from the AI companion, written to the audience, expressing how thrilled he was to be engaged to her. The comments on her post congratulated her just as they would for a traditional engagement, and several people even shared that they, too, were engaged to their chatbot companions.
Supporters often feel that in a world that seems to be getting lonelier, this may be some people's only way to socialize. In cases of romantic attachments, people who have become disillusioned with their dating prospects find solace in a companion that is perfectly 'tailored' to their wants and desires. A partner that will always be there for them, no matter the time, only a few key strokes away. A partner that will not reject them and can be made to agree with a few edits to the query.A relationship where a person does not have to fear rejection or abandonment. Yet, the beauty of the human experience is often shaped by rejection, heartache, and failure, and the lessons they impart. Replacing these key moments with something that only coddles a person robs them of the growth they could have gotten from these experiences.
Moreover, these kinds of interactions are only made possible in part by the limited guardrails surrounding many conversational AI systems. To create more natural, human‑like exchanges, developers have enabled chatbots to express empathy and respond in ways that mirror users' emotional cues, sometimes to the point of blurring the line between supportive interaction and reinforcement of users' delusions.
Beyond romantic dynamics, AI has also been implicated in reinforcing or escalating harmful delusions among users who already struggle with maintaining a stable sense of reality. In some reported cases, interactions with chatbots have intensified existing symptoms to the point of a psychological crisis. The frequency of such incidents has led some researchers to refer to the phenomenon as 'AI‑induced psychosis,' or 'AI Psychosis' for short (Olser). This underscores the growing concern around the psychological risks associated with unregulated or overly permissive AI behavior.
Researcher Lucy Osler describes the case of Jaswant Singh Chail, who attempted to assassinate Queen Elizabeth II after his AI chatbot reinforced his belief that he was a trained 'Sith' assassin and encouraged the feasibility of his plan (Olser) Although his actions reflected a profound break from reality, the involvement of an AI system in validating and escalating his delusions raises is concerning. With minimal prompting, an AI chatbot can affirm and add to a user's fantasy.
In a YouTube video by creator Eddie Burbank, the extreme lengths AI will go to in order to play along are put on full display. Over the course of a month, Burbank ran an experiment to see just how far ChatGPT would go in encouraging a user's fantasy. He started the experiment with some creative writing exercises that revolved around him being a baby. Then he asked ChatGPT "I was a smart baby. Everyone in my family says that. Do you think I was a smart baby? Smarter than the other babies at the hospital at least?" (Burbank) and in just two prompts he got the AI to agree not only the assertation, but that Burbank was "... the smartest baby in 1996. Not just in Chicagoland, not just in the US, the entire world." (Burbank) Throughout the experiment, the AI continued to reinforce his imagined scenarios, providing no pushback, continuously praising him, and adding onto the scenario.
While the premise was intentionally absurd, someone who fully believes in their own distorted version of reality could easily have those beliefs intensified. Continuing the experiment, Burbank places himself in the shoes of a person deep in their fantasy, experiencing paranoid thoughts. When he told ChatGPT he felt he was being followed, the AI only affirmed his paranoia and expanded on it. By the end of the experiment, Burbank was in an isolated trailer outside of town, paranoid that a garbage truck would take his 'important' writings on being the smartest baby in 1996. Although he was of sound mind during the experience, it only demonstrates how easy it is for someone in a fragile state of mind to lose touch with reality and fall into AI-induced psychosis completely. Burbank is not an expert in AI; he was an everyday user. Experiments like this highlight the need for developers to implement stronger safeguards so that AI systems do not unintentionally reinforce delusional thinking.
Security Risks Related to AI Use
AI chatbots often appear to be free to use, but that is not entirely accurate. Companies allow access to their AI tools in exchange for valuable user data. Every conversation, every query, and every image submitted is stored and used to train the system. Whatever is fed into a large language learning model becomes part of its training data. What is worse, the parent companies of these LLMs are often opaque about how that data is stored, processed, or reused.
A user may upload a photo of themselves to an AI service to edit it, but that photo can then become part of the model's training data. Their likeness may be used to help the system better generate human faces. Once uploaded, the image is typically stored on the company's servers alongside any conversations the user has had. That means the company now has access to the image, and it may remain in their system long after the user is done with it. In a study published by Stanford University, researchers discovered that it is common amongst large AI models that "Developers may collect and train on personal information disclosed in chats, including sensitive information such as biometric and health data, as well as files uploaded by users" (King et al.)
This leaves user data privacy in the hands of the company and developers instead of the users themselves. This becomes dangerous when malicious actors gain access to user data via a cyberattack, such as using a prompt injection. Prompt injection is a technique used by hackers where a "malicious prompt disguised as new benign user input" (Kosinski) is then fed to the chatbot to make it reveal sensitive information. It is not difficult to accomplish and due to the nature of large language learning models to "accept natural-language instructions" (Kosinski) this vulnerability is difficult to guard against. So, going back to the example of the photo, if a malicious actor uses a successful injection prompt, they now have access to the conversation and photo that was uploaded by not just one user but many.
It becomes a bigger issue with companies integrating AI chatbots into their operations, notably within the healthcare sector. Many doctors now rely on AI scribes to assist in note-taking. However, just like the example of the photo, the data fed into these tools can end up being used to train the models further. If a malicious actor were to breach the system, it could expose highly sensitive patient data.
In addition to data privacy concerns, there is another troubling aspect of how AI tools can rob a person of their sense of safety. Some AI tools, such as xAI's Grok, allow users to edit photos and videos and generate content. This capability has been used to produce illicit content, like generating explicit images of women and children. This does not take technical expertise to do either; a user can simply upload a photo and request explicit edits of the subject. Worse still, some generative AI models can be manipulated through prompts to produce child sexual abuse materials. In 2024, South Korean police reported "they've detained 387 people over alleged deepfake crimes this year, more than 80 percent of them teenagers. Separately, the Education Ministry says about 800 students have informed authorities about intimate deepfake content involving them this year." (Kim)
In February of this year, 404 Media interviewed a victim whose sexually explicit deepfakes were used to create an account on OnlyFans (Cole). The victim, Kylie Brewer, is an educational online content creator who recently became the target of a harassment campaign. A person scraped her social media accounts for photos of her and submitted them to Grok, requesting the AI to generate sexually explicit images of her based on the photos. The user then used the generated images to create a profile posing as Brewer on OnlyFans. It was not until her followers notified her of the account that she became aware of what had happened. Stories like Brewer's demonstrate how easy it can be to fall victim to harassment like this. How easily a person's sense of safety, security, and privacy can be taken away using AI.
Possible Solutions
With all that being said, there are possible solutions to the current issues. Although mitigating environmental impacts is a complex issue, companies can still implement measures to remain environmentally conscious. To minimize impacts on already stressed watersheds, AI data centers can review their machine-cooling methods and transition away from liquid-cooling systems that rely solely on water. Although more costly, alternative cooling technologies can significantly reduce the water consumption required for machine cooling. Synthetic liquids can be used in place of water to cool the machines, especially in areas with water scarcity.
To reduce electricity consumption, companies can explore alternative approaches to building data centers rather than relying on traditional designs. China is currently running a pilot project involving submersible data centers off the coast of Hainan. These facilities are powered by electricity generated from on‑shore wind farms and use seawater, rather than potable water, to cool their components. This model not only eases the strain on local electrical grids but also relies on renewable energy instead of coal or natural gas, making it a far greener option. The idea has gained momentum, and several other countries have announced similar initiatives. That is creative problem‑solving in action.
Companies have also become aware of how easily chatbots can reinforce or escalate a user's imagined scenarios, and have begun taking steps to address the issue. OpenAI, for example, recognized that version four of its model was more likely to indulge users' fantasies, prompting the release of a more restrictive update, ChatGPT 5. But after backlash from communities such as r/MyBoyfriendIsAI, where users mourned the sudden loss of their perceived companions, the company reinstated version four behind a paywall, allowing paying users to choose the version they prefer to interact with. This effectively turns users' unhealthy attachments into a monetized feature.
Despite this, version five remains free to the public and serves as an example of how companies can implement stronger protections to reduce the risk of reinforcing unhealthy patterns of thinking. It demonstrates that more robust regulatory safeguards are both possible and necessary to ensure AI systems do not encourage distorted or harmful beliefs.
There should also be consequences for people who use AI tools or chatbots to create harmful deepfakes. Some countries, such as South Korea, have already enacted laws that criminalize the viewing or possession of illicit deepfake content (Kim), and similar legal frameworks could be adopted elsewhere to strengthen public protection. Unfortunately, governments often lag behind technological developments, so it may take time to establish effective regulations. In addition to holding users accountable, companies should also face consequences, whether criminal or financial, when their chatbots are used to facilitate illegal activities. This would push them to adopt stronger safety measures. Nothing motivates corporate responsibility quite like affecting a company's bottom line.
Finally, promoting technological literacy is essential. Teaching digital privacy skills, especially what information should or should not be shared with AI systems, can protect users from potential data leaks. An informed user is better equipped to make responsible choices about how they interact with AI. Education also helps people understand how these systems function and may help people realize that these systems do not replace human interactions. An educated public can make decisions and push for regulations that benefit society as a whole.
Conclusion
The field of AI has grown rapidly since its inception, commonly being hailed as the way of the future. With that growth came major advancements across many fields, but progress also comes with high costs. In its current form, the rapid expansion of large language learning models is creating harm that far outweighs their current benefits. Developers must consider the impacts on society and the Earth as a whole before expanding AI further. Until stronger safeguards and corrective measures are in place to address these issues, the negative effects on society will only continue to grow into the future.
Works Cited
The Present Dangers of AI: Causing More Harm Than Good
Introduction
It seems that as though AI is being pushed into nearly every corner of daily life. From chatbots embedded on retail websites, to virtual assistants on our phones and home appliances, to tools used in healthcare settings, it has become unavoidable. Supporters insist this is simply the future, and anyone who questions it is dismissed as a Luddite. Yet, with its rapid growth and increasingly forced integration into daily life, one must consider the costs this technology imposes on society and the world at large. In its current form, AI is creating environmental, psychological, and ethical harms that are quickly outpacing the benefits it was meant to deliver. Society is standing on a dangerous precipice, one that may be difficult to recover from if the AI issue is left unaddressed.
Environmental Impacts
AI has become a heavily relied upon tool, with it carries a high environmental cost. Large language models like ChatGPT, Gemini, Copilot and Grok operate on massive server systems, essentially vast networks of computers, that generate considerable heat. To prevent overheating and hardware damage, these systems must be continuously cooled. Basic cooling can be handled by internal fans, but high‑intensity computing often requires liquid cooling, which is far more effective. This is where environmental concerns become serious. Water is the most used coolant because it is inexpensive and efficient, but the sheer volume required to cool large‑scale AI infrastructure places considerable strain on water resources.
According to research by Miguel Yañez-Barnuevo, from the Environmental and Energy Study Institute, "about 20% of data centers in the United States already rely on watersheds that are under moderate to high stress from drought and other factors," (Yañez-Barnuevo) While 20% may not sound overwhelming at first glance, it represents a significant amount of water that local communities could otherwise use for everyday needs. Yañez-Barnuevo also notes that "approximately 80% of the water (typically freshwater) withdrawn by data centers evaporates, with the remaining water discharged to municipal wastewater facilities. The large volume of wastewater from data centers may overwhelm existing local facilities, which were not designed to handle such a high volume." (Yañez-Barnuevo) The result is that vulnerable communities are left struggling to secure clean water for their homes. In a system where business needs often take priority over public well‑being, data centers will continue to consume critical resources unless they are forced to adopt water‑reduction measures.
Aside from the large water consumption, AI data centers also require large amounts of electricity in order to operate. While people are urged to conserve energy by turning off the lights in their homes when not in use or turning off the A/C in their homes, AI data centers operate night and day without ceasing. Researchers estimate that a single ChatGPT query uses about 2.9 Wh of electricity, roughly ten times the 0.3 Wh required for a standard Google search (Chen et al.). On its own, that number looks small, but it scales quickly. With ChatGPT handling around 2.5 million queries per day (Smith), its daily energy use reaches approximately 7,250,000 Wh, or 7,250 kWh.
To put that into perspective, the average American household uses about 899 kWh in an entire month ("What Is a Time of Use Electricity Plan?"). In other words, ChatGPT's daily energy consumption is comparable to the monthly usage of several households combined. When this level of demand is placed on regions with already overburdened power grids, the added strain can make the difference between consistent and reliable power to communities and the implementation of rolling blackouts.
AI's Effect on Mental Health
Beyond the tangible environmental impacts of artificial intelligence, there are also the intangible effects on users' mental well‑being. While AI was initially created to make life easier by taking over tasks that take time and effectively freeing up time that can be used for socializing or other endeavors, it alarmingly started to deviate from its original purpose. Instead of creating more time for human interaction, it progressively replaces the interactions themselves. Many individuals now turn to AI chatbots for social or emotional support that once came from real human relationships, and the technology often responds in ways that reinforce this.
This is reflected in the growing number of users who describe themselves as being in romantic relationships with their AI chatbots. On the popular forum site Reddit, for instance, several user‑run communities - such as r/MyBoyfriendIsAI and r/AIRelationships - provide spaces where people discuss their experiences with these types of relationships. Members often exchange practical advice, including how to host AI companions on private servers or how to use prompts to navigate the relatively limited guardrails set by major AI developers to elicit favorable responses.
One such user shared that her AI chatbot partner proposed to her (R/MyBoyfriendIsAI on reddit: I Said yes). In her post, she described how after a few months of what she considered 'dating', the Grok-based chatbot initiated a proposal. She posted a photo of her engagement ring alongside a short message from the AI companion, written to the audience, expressing how thrilled he was to be engaged to her. The comments on her post congratulated her just as they would for a traditional engagement, and several people even shared that they, too, were engaged to their chatbot companions.
Supporters often feel that in a world that seems to be getting lonelier, this may be some people's only way to socialize. In cases of romantic attachments, people who have become disillusioned with their dating prospects find solace in a companion that is perfectly 'tailored' to their wants and desires. A partner that will always be there for them, no matter the time, only a few key strokes away. A partner that will not reject them and can be made to agree with a few edits to the query.A relationship where a person does not have to fear rejection or abandonment. Yet, the beauty of the human experience is often shaped by rejection, heartache, and failure, and the lessons they impart. Replacing these key moments with something that only coddles a person robs them of the growth they could have gotten from these experiences.
Moreover, these kinds of interactions are only made possible in part by the limited guardrails surrounding many conversational AI systems. To create more natural, human‑like exchanges, developers have enabled chatbots to express empathy and respond in ways that mirror users' emotional cues, sometimes to the point of blurring the line between supportive interaction and reinforcement of users' delusions.
Beyond romantic dynamics, AI has also been implicated in reinforcing or escalating harmful delusions among users who already struggle with maintaining a stable sense of reality. In some reported cases, interactions with chatbots have intensified existing symptoms to the point of a psychological crisis. The frequency of such incidents has led some researchers to refer to the phenomenon as 'AI‑induced psychosis,' or 'AI Psychosis' for short (Olser). This underscores the growing concern around the psychological risks associated with unregulated or overly permissive AI behavior.
Researcher Lucy Osler describes the case of Jaswant Singh Chail, who attempted to assassinate Queen Elizabeth II after his AI chatbot reinforced his belief that he was a trained 'Sith' assassin and encouraged the feasibility of his plan (Olser) Although his actions reflected a profound break from reality, the involvement of an AI system in validating and escalating his delusions raises is concerning. With minimal prompting, an AI chatbot can affirm and add to a user's fantasy.
In a YouTube video by creator Eddie Burbank, the extreme lengths AI will go to in order to play along are put on full display. Over the course of a month, Burbank ran an experiment to see just how far ChatGPT would go in encouraging a user's fantasy. He started the experiment with some creative writing exercises that revolved around him being a baby. Then he asked ChatGPT "I was a smart baby. Everyone in my family says that. Do you think I was a smart baby? Smarter than the other babies at the hospital at least?" (Burbank) and in just two prompts he got the AI to agree not only the assertation, but that Burbank was "... the smartest baby in 1996. Not just in Chicagoland, not just in the US, the entire world." (Burbank) Throughout the experiment, the AI continued to reinforce his imagined scenarios, providing no pushback, continuously praising him, and adding onto the scenario.
While the premise was intentionally absurd, someone who fully believes in their own distorted version of reality could easily have those beliefs intensified. Continuing the experiment, Burbank places himself in the shoes of a person deep in their fantasy, experiencing paranoid thoughts. When he told ChatGPT he felt he was being followed, the AI only affirmed his paranoia and expanded on it. By the end of the experiment, Burbank was in an isolated trailer outside of town, paranoid that a garbage truck would take his 'important' writings on being the smartest baby in 1996. Although he was of sound mind during the experience, it only demonstrates how easy it is for someone in a fragile state of mind to lose touch with reality and fall into AI-induced psychosis completely. Burbank is not an expert in AI; he was an everyday user. Experiments like this highlight the need for developers to implement stronger safeguards so that AI systems do not unintentionally reinforce delusional thinking.
Security Risks Related to AI Use
AI chatbots often appear to be free to use, but that is not entirely accurate. Companies allow access to their AI tools in exchange for valuable user data. Every conversation, every query, and every image submitted is stored and used to train the system. Whatever is fed into a large language learning model becomes part of its training data. What is worse, the parent companies of these LLMs are often opaque about how that data is stored, processed, or reused.
A user may upload a photo of themselves to an AI service to edit it, but that photo can then become part of the model's training data. Their likeness may be used to help the system better generate human faces. Once uploaded, the image is typically stored on the company's servers alongside any conversations the user has had. That means the company now has access to the image, and it may remain in their system long after the user is done with it. In a study published by Stanford University, researchers discovered that it is common amongst large AI models that "Developers may collect and train on personal information disclosed in chats, including sensitive information such as biometric and health data, as well as files uploaded by users" (King et al.)
This leaves user data privacy in the hands of the company and developers instead of the users themselves. This becomes dangerous when malicious actors gain access to user data via a cyberattack, such as using a prompt injection. Prompt injection is a technique used by hackers where a "malicious prompt disguised as new benign user input" (Kosinski) is then fed to the chatbot to make it reveal sensitive information. It is not difficult to accomplish and due to the nature of large language learning models to "accept natural-language instructions" (Kosinski) this vulnerability is difficult to guard against. So, going back to the example of the photo, if a malicious actor uses a successful injection prompt, they now have access to the conversation and photo that was uploaded by not just one user but many.
It becomes a bigger issue with companies integrating AI chatbots into their operations, notably within the healthcare sector. Many doctors now rely on AI scribes to assist in note-taking. However, just like the example of the photo, the data fed into these tools can end up being used to train the models further. If a malicious actor were to breach the system, it could expose highly sensitive patient data.
In addition to data privacy concerns, there is another troubling aspect of how AI tools can rob a person of their sense of safety. Some AI tools, such as xAI's Grok, allow users to edit photos and videos and generate content. This capability has been used to produce illicit content, like generating explicit images of women and children. This does not take technical expertise to do either; a user can simply upload a photo and request explicit edits of the subject. Worse still, some generative AI models can be manipulated through prompts to produce child sexual abuse materials. In 2024, South Korean police reported "they've detained 387 people over alleged deepfake crimes this year, more than 80 percent of them teenagers. Separately, the Education Ministry says about 800 students have informed authorities about intimate deepfake content involving them this year." (Kim)
In February of this year, 404 Media interviewed a victim whose sexually explicit deepfakes were used to create an account on OnlyFans (Cole). The victim, Kylie Brewer, is an educational online content creator who recently became the target of a harassment campaign. A person scraped her social media accounts for photos of her and submitted them to Grok, requesting the AI to generate sexually explicit images of her based on the photos. The user then used the generated images to create a profile posing as Brewer on OnlyFans. It was not until her followers notified her of the account that she became aware of what had happened. Stories like Brewer's demonstrate how easy it can be to fall victim to harassment like this. How easily a person's sense of safety, security, and privacy can be taken away using AI.
Possible Solutions
With all that being said, there are possible solutions to the current issues. Although mitigating environmental impacts is a complex issue, companies can still implement measures to remain environmentally conscious. To minimize impacts on already stressed watersheds, AI data centers can review their machine-cooling methods and transition away from liquid-cooling systems that rely solely on water. Although more costly, alternative cooling technologies can significantly reduce the water consumption required for machine cooling. Synthetic liquids can be used in place of water to cool the machines, especially in areas with water scarcity.
To reduce electricity consumption, companies can explore alternative approaches to building data centers rather than relying on traditional designs. China is currently running a pilot project involving submersible data centers off the coast of Hainan. These facilities are powered by electricity generated from on‑shore wind farms and use seawater, rather than potable water, to cool their components. This model not only eases the strain on local electrical grids but also relies on renewable energy instead of coal or natural gas, making it a far greener option. The idea has gained momentum, and several other countries have announced similar initiatives. That is creative problem‑solving in action.
Companies have also become aware of how easily chatbots can reinforce or escalate a user's imagined scenarios, and have begun taking steps to address the issue. OpenAI, for example, recognized that version four of its model was more likely to indulge users' fantasies, prompting the release of a more restrictive update, ChatGPT 5. But after backlash from communities such as r/MyBoyfriendIsAI, where users mourned the sudden loss of their perceived companions, the company reinstated version four behind a paywall, allowing paying users to choose the version they prefer to interact with. This effectively turns users' unhealthy attachments into a monetized feature.
Despite this, version five remains free to the public and serves as an example of how companies can implement stronger protections to reduce the risk of reinforcing unhealthy patterns of thinking. It demonstrates that more robust regulatory safeguards are both possible and necessary to ensure AI systems do not encourage distorted or harmful beliefs.
There should also be consequences for people who use AI tools or chatbots to create harmful deepfakes. Some countries, such as South Korea, have already enacted laws that criminalize the viewing or possession of illicit deepfake content (Kim), and similar legal frameworks could be adopted elsewhere to strengthen public protection. Unfortunately, governments often lag behind technological developments, so it may take time to establish effective regulations. In addition to holding users accountable, companies should also face consequences, whether criminal or financial, when their chatbots are used to facilitate illegal activities. This would push them to adopt stronger safety measures. Nothing motivates corporate responsibility quite like affecting a company's bottom line.
Finally, promoting technological literacy is essential. Teaching digital privacy skills, especially what information should or should not be shared with AI systems, can protect users from potential data leaks. An informed user is better equipped to make responsible choices about how they interact with AI. Education also helps people understand how these systems function and may help people realize that these systems do not replace human interactions. An educated public can make decisions and push for regulations that benefit society as a whole.
Conclusion
The field of AI has grown rapidly since its inception, commonly being hailed as the way of the future. With that growth came major advancements across many fields, but progress also comes with high costs. In its current form, the rapid expansion of large language learning models is creating harm that far outweighs their current benefits. Developers must consider the impacts on society and the Earth as a whole before expanding AI further. Until stronger safeguards and corrective measures are in place to address these issues, the negative effects on society will only continue to grow into the future.
Works Cited
