Phoebe Boysen
Adam Korman
ENG 102
November 6, 2025

AI Ethics
AI used to feel far away, inaccessible even, but now it's a click away on every screen you own. Students use it to complete homework. Workers use it to draft simple emails. Moms use it to see what their dog looks like in different costumes. Everyday users ask it simple questions that a human could easily answer. AI systems are influencing choices, and people are using AI without even realizing it. This widespread integration raises ethical concerns and affects how people learn, work, and even perceive the truth. AI's rapid growth has immense upsides: speed, access, and innovation. However, the downsides heavily outweigh the pros. It has created real problems, including weakened thinking, privacy violations, deepfakes, job loss, and the wasteful use of water and electricity. To balance the pros and cons, we need clear rules that protect both users and the environment.
AI moved from tech labs to our pockets in just a few years. Easily accessible apps make it difficult to avoid using them, and as the technology evolves it becomes more accurate and precise. Schools, students, and employees adopted AI at a pace that regulation could not keep up with. With this fast adoption, it became easy to get hooked and build habits around AI.
AI feels helpful because it saves time, but "saving time" often means skipping the critical thinking and practice that build skills, which almost always leads to reliance on AI. In one experiment with over 300 participants, many people chose AI advice over human advice simply because the AI answer sounded confident. The researchers found that "the mere knowledge of advice being generated by an AI causes people to over rely on it, even when it contradicts available information and their own judgment" (Klingbeil 2024). When you use AI for simple tasks and homework, you study and practice those skills less. Over the long term, this erosion of independent thought can become irreversible.
Certain AI tools, like Sora, can mimic voices, faces, and actions with alarming accuracy. These fabrications are called deepfakes. They can severely damage a person's reputation, spread lies about them, and trick both individuals and employees. Tuhin explains that deepfakes violate consent and identity: once your face or image is public, you can easily lose control of what happens with it. As Moehring points out, "the creation and distribution of deepfakes bypass the fundamental ethical principle of respect for another individual's autonomy when they are created without obtaining the person's consent" (qtd. in Tuhin 2025). Although some rules are emerging, they aren't being passed quickly enough, and a lot of damage has already been done. Even with laws in place, victims still face slow removals and inconsistent policies from app to app. For example, in 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy telling his troops to surrender spread widely before it was debunked. The incident showed how AI-powered misinformation can be weaponized to manipulate emotions and weaken institutions.
The Bureau of Labor Statistics (BLS) projects that AI will shift tasks within occupations rather than completely erase jobs: "Each occupation involves a set of tasks.... Although new technologies may change the composition or weighting of tasks performed by workers in an occupation, sometimes dramatically, they may still have no employment impacts" (Bureau of Labor Statistics 2025). However, the results vary from job to job. Some jobs might be eliminated, while others won't be affected at all. For example, paralegals may now use AI to summarize case documents, while data analysts use it to clean datasets faster. These tools don't erase work but redefine it, demanding new digital literacy skills that schools are only beginning to teach. Some roles tied to data and creative work will expand, while writing-heavy jobs, like journalism, will be heavily affected. This shift and uncertainty can cause major disruptions.
AI is not just a digital problem; it also affects the environment through the resources required to build and run it. Every question or task given to AI consumes large amounts of energy and water to process and answer accurately. MIT News notes, "The computational power required to train generative AI models that often have billions of parameters, such as OpenAI's GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid. Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems" (Zewe 2024). Some researchers estimate that training one large AI model can emit as much carbon as five cars over their entire lifetimes. The hidden cost of convenience is significant, especially in regions already battling drought and heat. To make AI sustainable, producers need to report how much water and energy they use, and regulators should use that information to set caps on consumption that keep usage sustainable. Developers should also focus on building smaller AI systems that don't drain scarce resources. Without strong regulations around AI, the environment will continue to suffer.
When we combine all these issues, it becomes abundantly clear that action needs to be taken. AI is no longer just a tech problem; it is a societal problem. If schools allow students to rely on AI for writing and thinking, the next generations will struggle to think on their own. If deepfakes continue to spread and evolve, people will lose trust in the news, in each other, and in anything posted online. If jobs change too rapidly, people with specific degrees will struggle to find work. And if companies continue to consume massive amounts of electricity and water, climate change will accelerate. AI has slowly reshaped everything; how we live and learn has already changed in ways that may be irreversible.
While long-term laws take time, there are immediate actions we can take. The first is to set clear rules and boundaries in schools. Student AI use needs stricter boundaries, with rules that carry real consequences. AI can help students with things like brainstorming or checking grammar, but it should not replace student work or thought. Schools can enforce these rules by monitoring students' screens, blocking AI chatbots on school-issued computers, and supervising phone use during assignments. If students learn to treat AI as a tool rather than a substitute, it could strengthen education instead of weakening it. Schools could also integrate AI literacy courses that teach how to verify sources, spot deepfakes, and protect data privacy. The next solution is developer transparency. Requiring developers to collect and publish data on their water and electricity usage should pressure them to use less. The last action is stronger privacy protection. Users should be able to decide what happens with their information, including whether their personal data and social media posts are used to train AI systems. There should also be stronger regulations around data and faster ways to remove content and personal information from AI databases.
Permanently fixing the issues AI causes will take more than adjusting a few things; we need lasting solutions. The first would be a national law around AI, applying to schools, companies, and everyday life. The law would require companies to test AI-generated content for bias, label when AI is being used, protect people's personal information, and allow people to remove themselves from AI databases. The second solution is protection for victims of deepfakes. People who create harmful deepfakes should face legal penalties, the platforms hosting them should allow fast removal, and deepfakes should be clearly watermarked and labeled as AI-generated. The last long-term solution is to help workers transition into new roles. BLS researchers suggest community college programs that teach people to work alongside AI.
The most common counterargument is that AI's benefits are greater than its harms. This might seem true when you look at how useful AI can be, but it ignores the real damage already happening. As Zhang and his colleagues explain, AI has already caused bias, mistakes, and misinformation: "Ethical issues, such as privacy leakage, discrimination, unemployment, and security risks, brought about by AI systems have caused great trouble to people" (Zhang 2023). Tuhin also warns that deepfakes have permanently destroyed people's reputations and caused confusion: "This has led to severe emotional, reputational, and even legal consequences for the victims" (Tuhin 2025). Another common argument is that AI will create more jobs than it removes. But even if the total number of jobs stays the same, some communities will be hit harder than others, and without proper training to work with AI, those workers will be left behind. The final argument is that regulation around AI will slow innovation. The opposite is true: strong rules help innovation because companies can plan ahead and build systems that are safer and more efficient. Zhang and Zewe both argue that transparency and sustainability make technology more trustworthy and long-lasting.
AI has transformed the world faster than any other technology. While it helps us write, learn, and connect, it also brings serious harm. It weakens human thought, invades privacy, disrupts jobs, and damages the environment, and those costs outweigh its benefits. Instead of banning AI, we need to guide it responsibly. As AI continues to grow, the most important thing we can do is stay aware. Everyone, from students to developers, has a role in shaping how it's used. Learning about AI's limits and holding companies accountable can help prevent future harm. The goal isn't to fear technology but to understand it, question it, and use it responsibly so that progress helps people rather than replacing them. Ethics matters because technology reflects the people who build and use it. If we ignore the moral side of AI, we risk creating tools that harm more than they help. Being thoughtful now can save future generations from problems we could have prevented today. Short-term solutions can make a real difference if we begin now, while long-term actions like strong laws, worker training, and sustainability standards will protect future generations. The challenge of this century isn't creating smarter machines but raising wiser humans to guide them, ensuring AI strengthens society rather than replaces it.
Sources
