Precautions regarding the use of artificial intelligence have been left to Hillsdale science professors’ discretion. Some have introduced more in-class writing while others have included statements in their syllabi addressing AI usage.
Chairwoman and Associate Professor of Chemistry Courtney Meyet said the chemistry department has no policy statement yet on AI, but some professors are adjusting their assignments to make cheating more difficult.
“Many of our faculty members have dropped our normal writing requirement or essay requirement for CHM-101 and have adopted different assignments,” Meyet said. “I will still include some writing assignments, but I’ll design them in a way where I will know that they will not be able to use ChatGPT effectively. They have to show evidence of their research, specific sources they used, and actually show me that they know how to use them.”
Associate Professor of Biology Andrew Russell said he provides daily reading questions for his students to answer by hand. This doesn’t eliminate AI usage, but it creates another obstacle for students.
“The only real change that I’ve made is just incorporating an additional statement in the syllabus that explicitly states that using AI to generate something is considered plagiarism,” Russell said.
Concerns about ChatGPT usage in chemistry are lower because the coursework places less emphasis on writing, according to Meyet.
“I don’t think we have to worry about it as much. We do have lab reports and more writing when our upperclassmen submit their theses,” Meyet said. “But we are working one-on-one with them on that thesis the whole way through, so it would be unlikely but noticeable if they did use it.”
Russell also said use of AI programs, such as ChatGPT, is not a significant concern in his department because of the lack of writing involved in science.
“I will incorporate essays on tests, but that’s in-class writing where students don’t have an opportunity to look something up and generate it through AI,” Russell said.
Though AI raises concerns about academic honesty, the scientific advancement it offers is undeniable, according to Russell.
“One of the limits to the scientific method is that we can only test the hypotheses that we actually think of,” Russell said. “If you have AI, that can actually help to design experiments and generate ideas that we haven’t even thought of yet. Then, the only limitation is the algorithms that it uses.”
Meyet said rather than ignoring AI, students should be taught how to use it as a tool.
“We are aware that the number of students using it may increase with time, and we shouldn’t ignore the fact that it is being used by students,” Meyet said. “It is something that we are going to have to learn to live with, and rather than fight it we should learn how to work with it. However, we should be careful not to neglect our own curiosity and ability to think independently.”
However, according to Assistant Professor of Physics Michael Tripepi, AI’s usefulness is limited: it can describe what a system does, but not explain how it works.
“What we are interested in with physics is really understanding how the natural world works, how you have a physical system in front of you, and how it operates,” Tripepi said. “AI is really good at predicting what a system will do or giving a function that tells how a system will evolve but fails at really explaining how that works.”
Rather than encouraging independent thinking, AI causes frequent users to depend on its responses, according to Tripepi.
“My biggest concern is that we will let AI take over our lives where we will no longer be active individuals just letting the computer decide,” Tripepi said. “We learn to let something else decide for us rather than us being agents and participants in making our decisions.”