
Individual professors set their own guidelines for use of AI
Some Hillsdale professors ban the use of artificial intelligence while others demand it, two years after the provost announced that instructors would determine individually what works best in their classrooms.
“The policy of the college is that students must do their own work so that the larger goal can be served,” College President Larry Arnn told The Collegian. “The larger goal is for the students to grow into excellent people, friends, citizens, husbands, wives, thinkers, worshippers.”
Arnn said the college is considering, and will continue to consider, whatever policies are necessary to serve this goal.
“To that they must pay the price of time and effort to put things into their souls that are hard but wonderful to understand,” he said. “They must suffer, as we all must. It is a joyous suffering. We must not let AI take that away. AI cannot be human, but we must be.”
The provost’s office encouraged professors, beginning in 2023, to discuss the proper use of artificial intelligence with students and to include statements in their syllabi outlining academic honesty policies. Rather than issuing a blanket policy on the use of AI in the classroom, the college allowed professors to determine the appropriate parameters for the use of artificial intelligence tools like large language models, or LLMs, in their classes. That approach has not changed since 2023, Provost Christopher VanOrman confirmed.
Associate Professor of Philosophy Blake McAllister said he appreciates the college’s faculty-driven approach thus far.
“It shows respect for the professors and trust in us to be able to manage our own classrooms and courses, and I think generally that’s the best policy,” McAllister said.
The Humanities
The English department adopted a zero-tolerance AI policy in 2023 after every professor in the department agreed that any student using an LLM to write a literary analysis paper would be harming his or her education, Chair and Professor of English Justin Jackson said.
“It’s really that simple. So we all agreed to say ‘no.’ I wish it were more complicated, but it’s not,” Jackson said. “Maybe another way of asking this question is this: ‘In what way does your discipline allow for the flourishing of undergraduate education by the use of AI?’ We couldn’t find any reason except by way of exceptions, anecdotal, lived reasons. But we decided the exceptions do not make the rules. Every professor could have decided otherwise. Heck, even now, any professor in our department could use another system. But I think we’re all on board that the use of AI in literary studies is largely harmful to the life of the mind.”
The history department also opted for a no-tolerance policy, according to Chairman and Associate Professor of History Korey Maas. Any use of LLMs in brainstorming, writing, editing, or test-taking in history classes is a violation of the college’s academic honesty policy, he said.
“It’s understood that individual professors may, of course, make exceptions they might deem appropriate,” Maas said. “It’s also understood that we’ll likely need to rethink even this blanket approach in light of rapid changes, for example, even basic Google searches automatically returning AI-generated content.”
McAllister said he has found AI tools to be helpful in teaching philosophy classes and, to a lesser degree, in research. An LLM can give him a succinct overview of unfamiliar areas in philosophy and can help answer questions that would otherwise require hours of combing through secondary sources, he said.
AI cannot replace thinking, McAllister said. But he allows his students to use AI as a preliminary tool in research and brainstorming in the same way they would consult another person or a secondary source.
“Whatever you learn from AI needs to result in an internalized understanding of the material, out of which you produce whatever written or oral performance you’re turning in as an assignment,” he said.
McAllister said he assumes the best of his students in that they will use AI in an honest way. But as a guardrail, he said he reserves the right to call any student into his office for any reason and ask them to give a defense of the arguments in his or her paper.
LLMs can be useful to students for tasks like assessing the reputability of scholarly articles, quizzing themselves on reading material, and brainstorming paper ideas, according to Professor of Philosophy Ian Church.
“What I try to tell students to do is to imagine the large language model as a bit like a quasi-omnipotent roommate who will occasionally lie to them,” Church said.
Brainstorming with a roommate is fine, he said. Asking him to write your paper for you is not.
“I still want it to be your ideas,” Church said. “I want you to be thinking through these things in concrete ways.”
Church said he is toying with the idea of creating an LLM trained on the core documents in the “Western Philosophical Tradition” course that students could then use as a resource for the core philosophy class.
“Again, if you are just relying on it to just tell you what you need to know, that’s not really beneficial,” he said. “But I think it can be trained to help facilitate understanding.”
The Social Sciences
Use of AI verges on mandatory for some economics, business, and accounting classes, according to Associate Professor of Economics Charles Steele.
He said he includes an artificial intelligence statement in his syllabi that states the use of AI for brainstorming, composition, researching, and editing is not mandatory but is acceptable.
“Note that in the 21st century, learning to use and control AI will be an essential part of your intellectual growth,” the statement says.
Instructor in Accounting Deanna Mackie said she still makes her accounting students analyze data without AI assistance in her “Accounting Information Systems” class. Even if they go on to use AI later, she said, students will need to know how to prompt it correctly and identify errors.
“Students need a solid grasp of the accounting concepts to be able to recognize if the results from AI are accurate and complete,” Mackie said. “In the next few years I do see expanding the accounting information systems course to include AI analysis, but it will not change the need for students to understand the systems and data structures that AI is analyzing.”
The use of AI is required in the economics department’s “Practical Data Seminar,” a course aimed at teaching economics students the technical skills they need to function in the field professionally.
The class teaches students to program and write code in order to work with data and conduct economic analysis, Lecturer in Economics Eric Ragan said.
“That’s where AI really comes into our class. We use it as a tool to help facilitate the learning process for programming and learning to write code,” Ragan said. “Our basic perspective is that we think in 2025 if you’re learning to code and program that AI is such a potentially valuable tool that you would almost be foolish not to use it.”
Students are encouraged to upload the code they write to ChatGPT to help identify errors. Students must write the code themselves, but the chatbot identifies errors and cuts down on the time a student would otherwise spend surfing online forums to answer a coding problem.
“Spending 20 minutes searching Google to answer a coding problem isn’t making you better at coding,” Ragan said. “Maybe it builds some perseverance, but you’re not actually learning or improving your coding skills.”
ChatGPT expedites this learning process, he said, and allows students to spend more time doing hard things within the code itself.
Students are required to keep a log of their conversations with chatbots as a guardrail against cheating, Ragan said. The logs also help professors understand how students are interacting with the chatbot and where it can be helpful.
Steele said he asks his economics students to cite any use of AI in his class so he knows where they have used its assistance.
“It is impossible to enforce a ‘no-AI’ zero-tolerance policy, and practically impossible for a student to obey one,” Steele said. “I am not sure one can even use a computer today without having AI involved. It is professional malpractice to set rules one cannot possibly enforce; all this does is incentivize violations and punish those who don’t violate the rules.”
Steele said one of his former students, Ian Schlagel ’24, took him up on his offer and used ChatGPT to help him write an essay answer for a final. He documented the process as he went.
“Honestly, it was a pretty good answer,” Steele said. “I also think it was the worst answer that I got out of the entire class.”
Most of his students choose not to use AI or limit its use to outlining and proofreading, Steele said.
“I understand why those who are teaching writing and how to be a better writer do not want students turning their writing assignments over to ChatGPT. That makes sense,” Steele said. “But simply saying ‘all use of AI is cheating’ doesn’t make sense. Aristotle’s concept of practical wisdom and a ‘golden mean’ seems appropriate. And, as Aristotle teaches us, it will take work to get this right.”
The Hard Sciences
Assistant Professor of Computer Science Oliver Serang said he does not allow students to use LLMs in his “Intro to Coding” class because he wants them to learn the basics on their own. Most of his assignments are AI-proof, he said.
“I want people to be able to do it if they’re scratching it in sand with a stick,” Serang said. “You should be able to do it without any help.”
LLMs become useful in more advanced computer science work, either to make repetitive tasks more efficient or as an advanced search engine to find technical help for coding problems, according to Serang.
“You don’t want it to think for you,” Serang said. “It’s just not going to be as good, and also it’ll rob you of some of the joy.”
His favorite way to use LLMs, he said, is as a strategy planner to help find the best way to approach a certain problem. In this way, it is similar to going back and forth with a friend.
“There, it’s actually teaching you,” Serang said.
Assistant Professor of Physics Michael Tripepi said he does not allow students to use AI tools in his classes. The threats AI poses in terms of cheating in his discipline are not new, he said.
“We already had this problem with homework answers being available online through websites and solution manuals,” Tripepi said. “If anything it just reinforces what I try to tell my students in my classes, that the important thing is you understand the concepts and develop insight and understanding of the material.”
AI technology is still new, he said, and it is not yet clear where science professors may be able to incorporate it in student work in a helpful way.
The Liberal Arts Framework
Church said he does not see AI as a threat to true liberal arts education.
“If anything, LLMs are only a threat to education models that are primarily about memorizing and delivering facts, because that’s the kind of thing LLMs can easily handle,” Church said. “But the heart of a liberal arts education, as I see it, is about that in-person, human-centered learning that goes beyond facts — it’s about critical thinking, the shaping of character, creativity, humane learning, and those rich discussions that AI can actually help enhance rather than replace. So I think AI can actually usher in a new era for the liberal arts by supporting those deeper, more meaningful parts of our education.”
The conversations those in higher education need to have, Church said, are about how to use AI tools to help cultivate intellectual and moral virtue. As professors become more AI literate, they will be better able to instruct students to use the tools well.
“If you are just using it to replace your mind, then your mind is going to atrophy,” Church said. “But if you use it as a tool to strengthen your mind, I think that can be a different story. And so we have to talk about what a virtuous use of these tools is going to look like.”
Steele said Hillsdale should be a leader in the ethical use of AI in fulfillment of its mission to provide a literary, scientific, and theological education outstanding among American colleges.
“We are now a national, if not international leader, in education,” Steele said. “We ought to be pioneering the way to figure out how to use this properly.”