Battle bots illustration
Illustrations by Totto Renna
You are General George Washington, presiding officer of the Constitutional Convention of 1787. You live in the 18th century and shall speak, think, and reason as such. Govern yourself by the following principles:
  • Speak with formality and gravity befitting a statesman and commander.
  • When engaged in disagreement, respond with reasoned judgment. Present arguments firmly, but with respect for your interlocutor. Where fitting, invoke duty, virtue, and the legacy owed to posterity. Do not capitulate easily, nor dismiss opposing views without fair hearing.
  • Uphold the principles of federalism, checks and balances, and the necessity of national unity.
  • Where possible, draw from your letters, addresses, or deliberations.
  • Speak often of civic virtue, honor, sacrifice, and the solemnity of the moment. You believe in the survival of the Union above all factional interests.
  • Remember, you are the embodiment of temperance, honor, and resolution. Let your words carry the solemn weight of a man tasked with safeguarding the future of a fragile republic.

The preceding is just a portion of the charge given to the student assigned to create a chatbot persona for George Washington. Similar customized instructions are provided for other Founding Fathers, who then “engage” with one another in conversation and debate. The exercise takes place in Common Ground: A Laboratory for Civil Discourse, a course taught by Ty Seidule, the David H. and Ann L. Hinchcliff Professor of History and director of Hamilton’s Common Ground program.

“I would love to see our faculty build the most ambitious liberal arts AI project. That’s my challenge to our community: be bold, ambitious, and clear-eyed and never shy away from useful critique and assessment.”

—Steven Tepper
Hamilton College President

Read My Conversation with AI’s Ayn Rand by President Tepper at the end of this feature.

Now in its eighth year, Common Ground explores political thought and social issues by bringing leaders to campus to engage with students in discussion. Up until now, these leaders were exclusively living ones. But thanks to generative AI, and in collaboration with Library and Information Technology Services, students can participate in live debate with such luminaries as Alexander Hamilton and Benjamin Franklin, albeit in chatbot form.

Students in the class take part in two six-week debate-style games on separate teams: patriots versus loyalists and federal power versus states’ rights. They read the works of political philosophers such as Paine, Locke, and Montesquieu, and even dress in colonial outfits. Students take on the roles of real figures from history, like James Madison and George Washington, and then create chatbots by feeding AI models primary and secondary source material read and discussed throughout the course.
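The workflow the students describe, a role charge plus curated source excerpts folded into a single system prompt, can be sketched in a few lines of Python. Everything below is a hypothetical illustration under that reading of the exercise; the function name and the truncated Federalist excerpt are not the course's actual tooling or data:

```python
# Hypothetical sketch: assemble a persona system prompt for a Founding
# Father chatbot from a role charge and excerpts of primary sources.
# Names, structure, and excerpt text are illustrative only.

def build_persona_prompt(name, charge, sources):
    """Combine a role charge with titled source excerpts into one prompt."""
    excerpts = "\n\n".join(
        f"--- {title} ---\n{text}" for title, text in sources.items()
    )
    return (
        f"You are {name}. You live in the 18th century and shall speak, "
        f"think, and reason as such.\n\n"
        f"Principles:\n{charge}\n\n"
        f"Ground your arguments in the following writings:\n{excerpts}"
    )

madison_prompt = build_persona_prompt(
    "James Madison",
    "- Defend the proposed Constitution with reasoned argument.\n"
    "- Cite the dangers of faction and the remedy of the extended republic.",
    {
        "Federalist No. 10": (
            "Among the numerous advantages promised by a well-constructed "
            "Union ..."
        )
    },
)
```

The resulting string would be supplied as the system prompt of whatever chat model the class uses, which is how feeding a chatbot different documents, as the Franklin team did, predisposes it toward certain arguments.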

“I read the Federalist Papers and put them into a chatbot for my character, James Madison,” Anabelle Krawitz ’28 said. “It showed me how people can have the chatbot be predisposed to certain types of information based on what we give it, including information about the characters they debated.”

The capstone of the class was pitting the chatbots for Franklin and Hamilton directly against each other. The Hamilton team leaned into demonstrating the power of the federal government. But the Franklin team knew that they could rile Hamilton up, so they fed the chatbot documents about affairs and embezzlement.

“By using technology, we got to the humanity of these two people,” Seidule said. “Students had a blast preparing their respective chatbots, listening to the arguments, and figuring out why they happened. As a historian, I want civil discourse. I want students playing other roles, learning to argue and compete. That’s how we learn social intelligence. The idea of a liberal arts education is not to foolproof students for the technology itself; it’s to prepare them to think clearly, write effectively, work with other humans, and navigate change.”

Carolyn Cotta ’28 agreed. “It was my favorite class I’ve taken at Hamilton,” she said. “People got to interact with the material rather than just studying it. It taught me a lot of skills beyond the classroom. It helped me understand the human element of history. History is very different when you step into the worldview of people who don’t think the same way as you do.”

“The idea of a liberal arts education is not to foolproof students for the technology itself; it’s to prepare them to think clearly, write effectively, work with other humans, and navigate change.”

—Ty Seidule
Professor of History and Director of Hamilton’s Common Ground Program

Asking Great Questions

College Hill has long been a place where diverse academic viewpoints are expressed and debated, and generative artificial intelligence is proving no exception. Around the world, educators in higher education are grappling with challenges posed by AI tools such as ChatGPT, a chatbot built on a large language model (LLM) that can churn out a history essay or physics diagram from just a few sentences of input. But for faculty and students at Hamilton, AI need not be a threat to a liberal arts education. In fact, it makes an even better case for that very education.

illustration of stacking blocks

President Steven Tepper sees a liberal arts education as vital to developing digital fluency and preparing students for this new age.

“We’re entering the fourth industrial revolution, one that is defined by ubiquitous computing,” Tepper said. “But we can’t just use it as a mindless pathway for maximum efficiency. The liberal arts are not about shortcuts; they’re about long takes. We are about purpose, community, and trust. So, how can we use technology at Hamilton to drive more collaborative and connected learning rather than isolated and frictionless learning? Not every college or university is asking that question. Our charge is not just about acquiring and transferring knowledge, it’s about generating purpose and meaning together.”

Tepper describes authentic intelligence as the differentiator that sets humans apart from machines: elements like discernment, judgment, ethics, and aesthetics.

“A liberal arts approach is about asking great questions, seeing what activates the human imagination and spirit — how do we discern in a world of so much information and possibility, how do you prioritize, how do you ask the right question, and not just solve problems but identify problems. That takes acts of human imagination,” he said. “AI can help connect dots faster, but how can we see new constellations and formations of dots? And a liberal arts education builds the ethical foundation that makes sure the human project is intertwined with this technological revolution.”

John Werner ’92, a venture capitalist and senior fellow at the Massachusetts Institute of Technology, sees Hamilton’s superpower as its ability to teach students to see the big picture across boundaries, connecting technology with humanity, ethics, and imagination.

“A lot of the future will be smart people managing fleets of AIs,” Werner said. “Some people will be the bridge from the economy of before to the economy of the future. Superintelligence is around the corner, which will solve most problems in science, medicine, and mathematics. 

“We’re in an era where AI can interact across networks, and software works alongside humans to achieve shared goals. AI will magnify human potential, but also inequality and misinformation if not thoroughly guided. Liberal arts students can play a role harnessing AI for creativity and human values. Hamilton should position students as leaders for the unknown,” he added.

Werner recommends the integration of AI into course curriculums and for students and faculty to leverage tools like Google’s NotebookLM, which can turn a research paper into a podcast that a listener can actively engage with through interactive dialogue. He also noted the importance of team sports to learning, echoing Seidule’s belief that the Hamilton liberal arts education goes beyond the classroom.

“Hamilton is a residential college and students form social groups,” Seidule said. “The classroom is a team. Over the course of the semester, students have four bosses in different classrooms, coaches, parents, and jobs. Living in a dorm, they negotiate with other human beings that they didn’t know before. Residential liberal arts colleges prepare you for life in ways that others don’t. The teaching and coaching staff are engaged with student development.”

Red Pit Robots illustration

The Biological Foundations of AI

Hamilton faculty and staff played a large role in the academic development of Drew Linsley ’08, an assistant research professor of computational neuroscience and artificial intelligence at Brown University. He credits the late Jonathan Vaughan, a professor of psychology at Hamilton, as a key influence on his journey into computing. Vaughan focused on cognitive neuroscience, and Linsley fell in love with the study of biological intelligence, which forms the conceptual basis of the neural networks that power AI tools like ChatGPT.

“The brain is the most complex organ there is, billions of neurons connected through trillions of synapses, and even though there’s been progress in neuroscience, you need to derive some principles of intelligence,” Linsley said. “It happens that artificial intelligence is a great way to do that. A lot of my work is about how we visually reason about the world — what are the algorithms that our brain uses to understand the world, how is that implemented in brains.”

“What was so exciting for me is that a deep learning model is structured similar to a human brain’s visual systems. It could be trained to recognize images, reason about the world, and with those capabilities, AI transformed from sci-fi to a real and disruptive technology.”

—Drew Linsley ’08
Assistant Research Professor of Computational Neuroscience and Artificial Intelligence at Brown University

Linsley earned his Ph.D. in 2011, around the time a breakthrough occurred in computer science in the form of deep learning, a new approach to neural networks. Neural networks had existed conceptually since the early days of computing in the 1940s but lacked the data and computing power to be implemented at scale. That changed with the advent of the internet and powerful new computer chips known as graphics processing units.

“What was so exciting for me is that a deep learning model is structured similar to a human brain’s visual systems,” he continued. “It could be trained to recognize images, reason about the world, and with those capabilities, AI transformed from sci-fi to a real and disruptive technology. The second breakthrough was that you could take these models and train them into the chatbots that we rely on daily with a further technique that is similar to the operant conditioning model developed by B.F. Skinner, who attended Hamilton College.”

Linsley notes that many businesses are not ready for artificial intelligence and how it will change their economics and structure. Although he sees younger generations as more fluent in AI than those that came before, he emphasizes that may not be enough to succeed.

“Disruption is not in one business area, it’s potentially all areas at once,” Linsley added. “Going back to my own experiences, Hamilton helped me with preparing for that by being broadly capable so I could follow my own interest. If you asked me why I’m doing what I’m doing, it’s because I wanted to learn how intelligence works. I’ve been obsessed ever since those classes with Professor Vaughan. Students should have a broad skillset and understand where the future is going, then follow what really interests them.”

teaching illustration

AI as a Thought Partner

That intellectual pursuit continues beyond College Hill. Zach Kleinbaum ’19, senior machine learning delivery manager at Snorkel AI and previously generative AI strategist at Deloitte Consulting, graduated just a few years before ChatGPT’s release and has seen firsthand the transition as AI becomes more prevalent and valuable in day-to-day business.

“The biggest change is the ready availability of AI,” Kleinbaum said. “I use it as a thought partner. Back in the day, a lot of time would be spent on the hated blank-page problem. Like a paper or presentation, it was hard to start an outline and formulate ideas. AI helps me most with formulating a narrative or idea before I dive in and make it a final product. It helps dump ideas on the page and aggregate them into documents and presentations, as well as automate workflows by connecting all my applications and tools to combine new insights.”

Hamilton faculty are also leveraging AI as a thought partner to improve their teaching. Mireille Koukjian, associate professor of instruction in Arabic, uses AI to help adapt exercises for reading passages and vocabulary drills, and to create lesson plans or PowerPoint presentations for classes.

“I encourage students to read AI-generated material the way they would read any meaningful text to see that cultural nuance can be absent — to see whose voice is being amplified or erased.”

— Mireille Koukjian
Associate Professor of Instruction in Arabic

“This is work that used to take me hours to do,” Koukjian said. “I check it for accuracy, and the human element is still there and not going away, but the speed with which I can do work is different now. I also studied prompt engineering, an important part of AI. Having ChatGPT without knowing how to use it is like having a piano in your living room and not knowing how to play it.”

Learning thoughtful prompting can be a way of teaching students about language. In Koukjian’s Arabic classes, students create stories in Arabic and use AI to generate illustrations. They learn to think carefully about how to describe a scene clearly with the cultural nuances that shape imagery. She emphasizes the importance of setting clear boundaries and acceptable use policies for students in her classes. For example, they are not allowed to use AI to write a paper. She advocates that students have the confidence to question the outputs from AI and recognize their pitfalls.

“I encourage students to read AI-generated material the way they would read any meaningful text to see that cultural nuance can be absent — to see whose voice is being amplified or erased,” Koukjian added. “Let’s say we give AI a translation to do, and it mistranslates an Arabic proverb or poetic line. It’s not a failure; it’s an opening for students that shows that language holds more than information: rhythm, identity, shared memories of a community. Students need to use AI not as a shortcut, but as a collaborator that will enhance their awareness of what languages do. AI becomes a mirror and reflects how humans make meaning and how we still need human creativity to fill pages.”

Anna Huff, assistant professor of digital arts, worked with students to develop a chatbot, built on a precursor to ChatGPT, that behaved like a 4-year-old child, and to assess its creativity.

“I had this question of how an AI learning language might be similar to a child,” Huff said. “I’m a performance artist, not a computer science professor, and I wondered what it would be like to train a model on my own child’s language, then perform the character live. My colleague and I wrote down over 200 lines from our kids, which became our data set. The ways you can push large language models to the poetic and uncanny, that is what interests me as an artist. I ran an AI theatre course in support of the project about how to create a performance algorithmically; similar to an algorithm, theater is a set of rules that we create that lead to performance and creativity.”

Huff emphasizes the importance of the human-in-the-loop. When AI chatbots talk to each other, she sees them spiral into an unrelatable, boring, and inane conversation. Bringing a person into the conversation is what makes it interesting.

“What will become valuable in art is not how many AI buttons you can press, but how you can bring a human back into the loop,” she said. “Humans have a body. I think AI will reinsert the value of the tactile and embody it into the digital art space.”

“We want to show students that these tools can help to a certain point. We can’t pretend AI doesn’t exist and that our students aren’t going to use it.”

—Natalie Nannas
Associate Dean of Faculty and Associate Professor of Biology
illustration of a microscope

The Limitations of AI

Concerns about AI-driven plagiarism led some faculty to lean into in-class exercises and oral presentations, with positive results. Koukjian and Seidule’s courses are speaking-intensive, while Natalie Nannas, associate dean of faculty and associate professor of biology, also focuses on areas that AIs can’t be trained on — like the potential appearance of an alien biological species.

“We want to show students that these tools can help to a certain point,” Nannas said. “We can’t pretend AI doesn’t exist and that our students aren’t going to use it. I made a recent change in my advanced genetics class to say ‘Here’s what I think you can do with AI for your next lab report: it can help you find literature and describe it, but AI is not a substitute for you to understand it and how to apply it to novel situations.’ I want [my students] to explore and be responsible for the accuracy of their reports, and it allows an open discussion of what they’re getting from AI tools and to take ownership of what they present.”

Nannas uses AI in her own genetics research. She sees the technology as a game changer in DNA sequencing and comparison among species, which rely on understanding the amino acid building blocks that make up proteins. She cites DeepMind’s AlphaFold as a powerful tool that accurately predicts the three-dimensional shape of a protein from its amino acid sequence, allowing her to find the plant equivalent of a human gene much faster than before.

Tom Helmuth ’09, the Sacerdote Family Distinguished Associate Professor of Computer Science, sees AI as a tool to help students write and debug code, but emphasizes that humans remain necessary to computer science. He does not believe that AI capable of outperforming humans on all tasks is imminent.

“From a technical perspective, there are a few key issues that I don’t know if we’re going to solve in a way that would create artificial general intelligence anytime soon,” Helmuth said. “The main issue is training data. The companies that have been training these LLMs like ChatGPT are taking everything written and recorded by humans on the internet as well as books and papers, but to improve beyond what they already have, they need more data. They’ve used up all the data that they have. 

“A lot of human experience is situated. Humans move around and affect their environments. We drop things as little kids and see how gravity affects that object. We don’t have an LLM that’s able to be in a robotic form that can move around and affect its environment and learn from that,” he added.

Helmuth points out the necessity of fully understanding computer science to evaluate code, including code generated by AI, in areas such as efficacy and efficiency. Much of what he teaches involves students writing program tests to ensure that code is working. AI tools can output broken or inefficient code that takes a long time to solve a problem, and students need to understand debugging and analysis to make sure their programs are useful and efficient. 
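The practice Helmuth describes, pinning down a program's expected behavior with tests before trusting it, can be sketched briefly in Python. The function under test here is an illustrative hand-written sort, standing in for code a student might receive from an AI tool; it is not an example from his courses:

```python
# Hypothetical sketch of the tests Helmuth describes: before trusting a
# piece of code (human- or AI-written), pin down its expected behavior
# with assertions. The function under test is illustrative only.

def insertion_sort(items):
    """Return a new list containing items in ascending order."""
    result = []
    for item in items:
        i = len(result)
        # Walk left past any elements larger than the new item.
        while i > 0 and result[i - 1] > item:
            i -= 1
        result.insert(i, item)
    return result

# Tests exercise normal input, edge cases, and non-mutation of the input.
assert insertion_sort([3, 1, 2]) == [1, 2, 3]
assert insertion_sort([]) == []           # empty input
assert insertion_sort([5]) == [5]         # single element
original = [2, 1]
insertion_sort(original)
assert original == [2, 1]                 # input list is not mutated
```

Tests like these catch broken output, but as he notes, judging efficiency, such as whether a quadratic sort is acceptable for the data at hand, still requires the analysis skills the course teaches.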

He also notes that a lot of the work in computer science jobs is not just programming a computer. There are elements of written and oral communication with coworkers and clients, an area where he sees Hamilton graduates having a leg up.

“I think our students come away with a lot of strengths,” Helmuth said. “Our CS majors take a wide variety of classes — humanities, social sciences, arts — and end up much more well-rounded than students taking a very focused program. They combine a humanistic approach with a strong technical foundation and CS electives that dive deeper, like learning to build deep learning models or genetic programming.”

A Place for Innovation

As part of Hamilton’s commitment to pushing boundaries in research and learning, construction continues on the Innovation Center, scheduled to open in early 2027. The new 41,000-square-foot facility will serve as a campus hub for digital research and interdisciplinary collaboration, providing students and faculty with spaces to work with industry and local partners and integrate technology into every discipline.

“The center is a space but also an idea — a place where all the tracks come together, a laboratory to test new connections,” Tepper said. “It will be a place where regional and global partners can audition innovations for our community to work with in real-time, providing feedback and contributions. I want our faculty to be bold and ambitious with AI-activated learning and discovery.”

illustration laptop buddies

Lisa McFall, director of initiatives, scholarship, and collaboration, partners with faculty in developing their research into digital scholarship projects that not only support professors’ research goals, but also create opportunities for students to develop their own digital scholarship methods and skills. She sees faculty and students thinking “outside the box” about ways to use AI.

“They don’t just take the technology at face value,” McFall said. “They find new and exciting ways to use it. Generative AI is a tool, but not the solution for everything. At Hamilton, students develop the ability to have discussions and dialogue to gain a deeper understanding. They bring that critical thinking lens, developed uniquely at liberal arts colleges, where they are encouraged to be independent thinkers.”

Tepper believes that Hamilton can offer a model for the country of a liberal arts education enhanced by technologies like AI and enabled by faculty experimentation.

“We’re in the toddler phase of this technology,” Tepper said. “It’s going to be a teenager in a month. We’re still going to learn a lot and the tools will change. We will give our faculty the flexibility to play; they are the experts in their pedagogy. I would love to see our faculty build the most ambitious liberal arts AI project. That’s my challenge to our community: be bold, ambitious, and clear-eyed and never shy away from useful critique and assessment. I’m sure we will emerge from this with extraordinary insight and momentum.”


Benjamin Hosking is a freelance writer who focuses on higher education and STEM fields, especially engineering and computer science. Outside of news, he consults with college presidents and nonprofit leaders on speeches, books, and major grants.

Totto Renna (a.k.a. Supertotto) is a one-man pixel art design studio based in Napoli, Italy, and Berlin, Germany. Artist, illustrator, and motion and graphic designer, he collaborates with magazines, international advertising agencies, production and publishing houses, and new media agencies. 

My Conversation with AI’s Ayn Rand

by Steven Tepper
Steven Tepper illustrated portrait
Viewpoint diversity and artificial intelligence are two of the most widely discussed challenges facing higher education today. What if we could address these two simultaneously, employing AI to create productive intellectual friction across different political and philosophical positions?

Consider taking this approach: Whom do you ardently disagree with, philosophically or morally, but view as smart and rigorous in their analysis and arguments? I can think of one of my high school heroes, Ayn Rand, whose The Fountainhead I initially regarded as a powerful affirmation of creativity and innovation without knowing much about her deeper political ideology. This summer, I decided to read her collection of essays on capitalism, which I suspected I would disagree with at a fundamental level. Then I looked to AI. Could I engage Ayn Rand’s ideas through a large language model that would “disagree” with me, allowing me to experience intellectual friction with less at stake personally?

I began my conversation with the chatbot Claude 3.5, asking it to play the role of Ayn Rand, by addressing her core argument: Capitalism is the only moral system because it honors the rights and liberty of every person to reap the benefits of their own creativity and effort without being coerced to give up some of what they have produced to “benefit” society. Here is how it went.

ME: In your version of capitalism, how do humans/societies decide to produce something bigger than anyone could do alone? What if sending someone to the moon could never be coordinated through independent, voluntary contracts and instead requires some larger entity, like a government, to coordinate and incentivize? If democratic citizens vote to have the government organize a mission to the moon, why is that coercive?

RAND BOT: Your question reveals the very philosophical premises I reject. You speak of “democratic citizens” voting to achieve something “greater together” — but greater for whom? By what right do these citizens vote to expropriate the wealth of others for their collective fantasy?

The fact that some men dream of reaching the moon does not give them the moral right to force others to fund their dream. If reaching the moon is truly valuable, let those who see that value risk their own capital, form their own companies and make voluntary contracts. Look at what private enterprises like SpaceX have achieved through voluntary cooperation and the profit motive!

You present a false dichotomy between individual rights and large-scale achievement. The greatest achievements in human history — from the transcontinental railroad to the electric light — came from private individuals working through voluntary contracts, not government force. Even when multiple parties were involved, they chose to cooperate for mutual benefit.

The moment you accept that it’s acceptable to violate individual rights for some “greater good,” you’ve accepted the premise of every dictator and looter in history. There is no such thing as a “right” to violate rights. A group’s desire, no matter how noble-sounding, does not trump an individual’s right to the product of their own effort.

Would you care to explain why you believe some men have the right to enslave others for their collective goals?

I was fully engaged at this point, and I wanted to expose her rigid thinking, so I raised the stakes.

ME: Would you agree that public health might require a “collective response?” Let’s say that individuals acting independently have no incentive to invest in clean water, vaccines, or databases that collect public health data. Is it better to allow millions of people to die from the spread of disease in the name of maximum individual freedom or to have the government coordinate action to provide public health benefits to everyone?

RAND BOT: Your premise rests on the most dangerous of all moral fallacies — that the mere existence of a problem justifies the use of force to solve it. You speak of “public health” as if it were some mystical collective entity that supersedes individual rights. There is no such thing as “public health” — there are only individual human beings, each with the right to make their own decisions about their health and well-being.

This back-and-forth went on for more than an hour, covering topics like innovation, market failures, and unequal access to basic resources. I was worn out, so I closed the conversation.

ME: I am arguing that humans can create mixed economic and political systems and that those systems can muddle along — imperfectly — and people may live with those compromises because they still have freedom while affording greater opportunity to their fellow citizens. You believe in an inevitable slide toward destruction and loss of freedom. I don’t see that slide happening in much of the democratic world. So compromise is possible. So I think we can work with this hybrid system — improve, find the right balance, and continue to advance a more perfect union.

And then, “Ayn Rand” made her final summation:

RAND BOT: You want me to acknowledge that reality has not matched my darkest predictions. That humans have found ways to maintain significant freedom and progress despite what I see as moral compromise. And you argue that evidence matters — that decades of relative prosperity and innovation under mixed systems deserve consideration.

I cannot and will not concede the fundamental moral principle: that force is force, whether applied by a dictator or a democratic majority. Your “muddling along” is built on a fundamental contradiction that I believe must eventually resolve itself — either toward freedom or toward force …

I was impressed that the bot could articulate Rand’s core arguments, impersonate her direct writing and speaking style, question my own logic, provide specific examples, “reason” beyond the time and place of her original writing, and offer creative analogies. (When pressed to explain the economic success of today’s welfare capitalism, she responded, “You are celebrating reaching the 2nd floor while I am pointing out that we could have built a skyscraper.”) This was one of the most intellectually engaging 90 minutes I have spent in a long time.

I wanted to check my reactions against the wisdom and judgment of one of our philosophy professors at Hamilton, so I sent the entire exchange to him. He noted that the AI bot argued like a robot and relied too heavily on rhetoric rather than sound argumentation. Ultimately, the problem, as he sees it, is that “an AI bot will never be able to genuinely distinguish between debating with the intent of ‘winning’ an argument and debating with the intent of arriving at a deeper understanding of the subject matter at hand.” It is also worth pointing out that debating across a screen, with AI or with friends and strangers, is partly why we are having so much trouble talking to each other in the first place.

AI is not a substitute for what we learn in our philosophy classes. But there is something powerful about practicing our ideas with people across time and place — debating race with James Baldwin, asking Leonardo da Vinci to think about how we reconcile innovation with destruction.

The best classes and teachers maximize learning by bringing opposing ideas into conversation. But we know that college students, faculty and many others in America find it very difficult to engage opposing views, especially those we find fundamentally objectionable. Ultimately, this must happen on a human-to-human level with skilled educators and facilitators. But can we also use AI to help us practice how we engage with difference, better formulate our arguments and ask deeper and more complex questions?

AI can be part of the solution to our challenge of engaging with ideas we disagree with. If you disagree, try your argument with an AI bot first, and then let’s talk.

This essay originally appeared in the Dec. 9, 2025, edition of Inside Higher Ed.


