I had my moment of reality regarding generative AI in my chemistry and physics classes this past term.
I’ve always taken the student frustration “I can’t Google your homework problems!” as a point of pride in my teaching. I’ve always authored my own homework problems as a scholarly outcome of my teaching, and used them for my own pedagogical needs within an open-source LMS rather than on somebody else’s proprietary homework server. I’ve revised what I assign with student needs at the center, my priority always being to give students the kind of practice that builds their skills in the physical sciences and develops the kind of problem-solving sophistication that will serve them regardless of where their education takes them.
So I always would laugh with students who took multiple attempts to complete my homework sets, or an exceptionally long time to work out one problem or another. “Do you think you’re threatening the record?” I’d say jovially, and tell tales of students gone by who made their own attempts to game my system – submitting guesses, getting them marked wrong, reverse-engineering the correct solutions, and FINALLY getting right answers – and developing their problem-solving instinct all the while.
The message was communicated and understood by the students – the effort put into understanding the homework, in whatever terms they chose to understand the homework, was always going to be worth it. And I held up my end of the bargain. The homework problems prepared students for questions that turned up on exams. The more practice the students got, the more prepared they were for the problem-solving they had to demonstrate proficiency in. Student evaluations of teaching aren’t perfect measures of satisfaction, but I got more compliments for my system than complaints.
And this semester, all that collapsed.
Many more solutions were submitted correctly on the first attempt. Many of those attempts took minutes, even seconds, on the student’s part, rather than hours or days. Despite all my attempts to persuade and to maintain the status quo, I knew the jig was up when a student submitted correct answers to randomly generated percent yield and limiting reagent stoichiometry problems – the historic stumbling block in developing understanding – just 68 seconds after first seeing them.
That student didn’t do the problems by themself, or even with the help of an expert tutor. Again, these chemical equations and mass amounts were randomly generated. Something technological had to be helping.
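For readers outside the sciences, here is a minimal sketch of the kind of randomized problem I mean – illustrative Python of my own devising, built on the classic reaction 2 H2 + O2 → 2 H2O, and not the actual generator inside my LMS:

```python
import random

# Toy randomizer for a limiting-reagent / percent-yield problem
# on the reaction 2 H2 + O2 -> 2 H2O. Illustrative only -- not
# the real generator from my LMS.

M_H2, M_O2, M_H2O = 2.016, 32.00, 18.02  # molar masses, g/mol

mass_h2 = round(random.uniform(1.0, 10.0), 2)   # randomly assigned
mass_o2 = round(random.uniform(10.0, 80.0), 2)  # starting masses

mol_h2 = mass_h2 / M_H2
mol_o2 = mass_o2 / M_O2

# The balanced equation consumes 2 mol H2 per 1 mol O2; whichever
# reactant runs out first limits how much H2O can form.
if mol_h2 / 2 < mol_o2:
    limiting, mol_h2o = "H2", mol_h2      # 2 H2 -> 2 H2O (1:1)
else:
    limiting, mol_h2o = "O2", 2 * mol_o2  # 1 O2 -> 2 H2O (1:2)

theoretical = mol_h2o * M_H2O  # theoretical yield in grams of H2O
actual = round(theoretical * random.uniform(0.60, 0.95), 2)  # the "recovered" mass
percent_yield = 100 * actual / theoretical

print(f"Given {mass_h2} g H2 and {mass_o2} g O2:")
print(f"  limiting reagent:  {limiting}")
print(f"  theoretical yield: {theoretical:.2f} g H2O")
print(f"  percent yield ({actual} g recovered): {percent_yield:.1f}%")
```

Every attempt gets fresh numbers, so a correct answer 68 seconds after first viewing means the crunching happened somewhere other than the student’s notebook.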
And that’s how I came to use ChatGPT for the first time myself: throwing my own ethical and moral concerns to the side, I input a randomly generated homework problem of my own to be solved.
And I got an immaculate solution in return.
The computers can do the students’ homework for them. And not just grammatically correct (if a bit nonsensical) essays, but conceptually rigorous problems in physics and chemistry. It amounts to a sea change in how I have to assume students are approaching work that they do outside of the classroom.
And if using ChatGPT earns students the same rewards as doing the work that has historically been necessary to build their problem-solving skill, then students – who have always been at least a little bit cynical in how they deploy their effort – are absolutely going to take the shortcut if they won’t get caught taking it.
It undermines, fundamentally, the trust I need to have in my students in order to make my classroom environment work.
And without that trust, what I do in the classroom just doesn’t work.
—
If it were any old technology undermining trust in this way, that would be bad enough.
It’s not any old technology. Generative AI is an unethical technology any way you slice it.
Ignore the implications for teaching and pedagogy for a moment. We already have multiple lines of evidence that the environmental impact of generative AI usage is staggering, and there is an increasing consensus that it will push tech companies to junk their carbon-neutrality goals and might prove a tipping point toward missing climate change goals. It’s true that other technology usage also depends on data centers and also taxes the environment – and frankly, this moment might prompt a reconsideration of that kind of usage as well. But specific generative AI activities, such as image generation, are especially egregious in their energy demands. The prevalence of generative AI is a threat to the health of the planet.
Not only is the training of generative AI natural-resource-intensive, it is human-resource-intensive, and the human trainers have been consistently underpaid and their contributions discounted in shaping the quality of the output. A Time exposé on OpenAI’s exploitation of Kenyan workers is now two years old, and it has yet to be answered with anything resembling reparation. That exploitation fits into a pattern of abuse of labor in the developing world by American tech companies – exploitation that those trapped in the work have called “modern-day slavery”.
And this exploitation is of a piece with the respect for human labor that AI’s promoters demonstrate. Generative AI is powered by large language models and image libraries, and the text and images that allow generative AI to move forward are consistently used without the permission of those who did the creative work in the first place – calling the very concept of copyright into substantial question. The tech companies admit that they need copyrighted works in order to train their tools – in fact, they freely argue in court that unfettered access to copyrighted works is necessary to keep generative AI developing.
Stolen intellectual property, abuse and exploitation of labor, and accelerating environmental damage. It should be abundantly clear that there isn’t an ethical case for using generative AI. Period. The harms that the development of this technology has perpetuated and continues to perpetuate would, if they were known fully and discussed freely, horrify us.
Those harms are not known fully, and they’re not discussed freely. We’ve been trained from very early ages, all of us, to be in awe of science and technology and to support their development. We’re encouraged to assume that tech lords like Bill Gates and Mark Zuckerberg and Jeff Bezos – and now, Sam Altman of OpenAI – have our best interests at heart, and that the free implementation of new technology will always improve day-to-day life. Damage to humans and damage to the environment are swept under the rug by people who have been trained to be respectful when the topic is technology – and, more subtly, to be unquestioning when the topic is technology.
We need to ask questions about how this technology got here. We need to ask questions about what this technology is doing to our world.
And we need to ask questions about how this technology impacts our humanity.
—
The thing that bothers me the most about generative AI is the impact it’s having on my students’ mentality.
Some of those homework assignments that couldn’t be Googled were big things. They were not easy for my students to accomplish, so when they were accomplished, there was celebration and satisfaction in the effort. I still keep a video that a couple of students at a different institution sent me after a 9-out-of-10 effort on a chemical nomenclature drill-and-kill exercise. They’d worked, and they’d built an instinct for naming compounds. And they sang about it on video to me. There was real joy in the accomplishment.
I still saw some of that from upper-division biochemistry students this term as they rebuilt their instincts for solution stoichiometry and buffering – topics that were never easy for me, and that I know aren’t easy for them.
I got none of that satisfaction from my lower-division students this term. Homework assignments were quietly completed. I saw very few moments of students feeling the satisfaction of accomplishing a difficult thing – and in many cases, I know that’s because they never felt it. They simply got points for submitting answers that ChatGPT gave them.
When the belief that comes from accomplishing a big thing is realized and internalized, it becomes self-sustaining motivation. What happens to that belief when the student realizes that a computer can solve the problem set far more quickly and effectively than their own first effort? Why do the exercise at all? What is being learned from it?
This isn’t an issue for one discipline or another, but for the entire academic enterprise. What happens to that belief when a student can rely on ChatGPT for a well-written essay? For a business plan? For a literature survey?
(No, ChatGPT can’t do all of these things. Yet. But the genie is escaping the bottle. I didn’t believe a year ago that ChatGPT would reach the point where someone could credibly say it was as good as, possibly better than, a well-written general chemistry solutions manual. If the technology moves forward without restriction, it may well be just a matter of time.)
A recent study by the Swiss academic Michael Gerlich, published in the journal Societies, examines the impacts of AI usage through the lens of cognitive offloading. The question that term raises is simple: when people turn to generative AI to complete complex tasks, and thereby hand the work of those tasks over to a computer, how does that affect their capacity to engage critically with the world around them?
The study tested the hypothesis that the more somebody uses AI tools, the weaker their critical thinking skills become. It gathered substantial evidence about participants’ use of AI tools alongside both self-assessed and systematically assessed critical thinking skills. All of the evidence gathered supported the idea that people use AI to intentionally hand off tasks that require serious thought, and that their critical thinking suffers as a result.
The thing that worries me is how readily people are giving up the very mental tasks that make them human. The feedback the study received from its participants included observations that marveled at the time savings AI was providing, and observations that actively worried about a lost capacity to respond to the world around them. Students in the study worried that they were losing something very fundamental as they handed their problem-solving over to an algorithm.
It occurs to me that there are a lot of us in STEM fields who need to be thinking about our fields not as technical disciplines where problems need to be solved by any means at the students’ disposal, but as liberal arts where the path the student takes to the solution is as important as the solution itself. If only the answer matters, then absolutely we should be employing technology in any way we can. We need to take the attitude that it isn’t just the answer that matters.
And the very nature of the technology itself is evidence that the answer isn’t the only important thing. How is the answer obtained? How much damage is done by the technology doing the answering? How much does the student lose by not exercising their own tools to do the answering?
When everything is said and done, what are we telling our students about their humanity? What messages do we send about how they are developing their minds to engage the world around them?
I wish I had every answer for this moment we find ourselves in. The work ahead of those of us in the education business isn’t going to be gentle or affirming. In the interest of affirming the humanity of the students who study with us, we’re going to have to push into some incredibly stiff headwinds – the headwinds of a technology industry, and increasingly of a broader culture, that insist resisting the advance of this technology is futile, and that its development will ultimately benefit the world as a whole.
But I am wholly convinced that if a technology undermines my students’ humanity, that technology isn’t ethical to use, for any reason. And I believe generative AI undermines my students’ humanity.
It is beyond time to ask questions about how we’ve given in to technology, and to be loud and forceful about those questions, for the benefit of all of us.
—
Thanks to Maha Bali for her framing of the harms of AI (see critique #5 in this blogpost); her framing became the one I used in this argument. Maha’s blog is perpetually important reading (and reflecting!) on all kinds of issues surrounding technology and education, and I cannot recommend her work enough.
Thanks also to Autumm Caines for her influential writing, and to my colleagues Susan Monteleone and Walter Wimberly for helpful conversations.
The cover image comes from piqsels.com and – in case it needs to be said in 2025 – not from an AI image generator.