Riding the AI Wave
Kris Hammond helps us navigate the new world of human-AI interaction. By Amanda Morris
If you ask a chatbot how old George Clooney is, it might tell you he’s still 36. If you ask who the mayor of Chicago is, it might say Richard M. Daley is still in charge.
These answers are obviously incorrect. But in the world of artificial intelligence (AI), chatbots spray information like a firehose of unchecked facts. Sometimes, they hit the target. Other times, they soak everyone with a torrent of carelessly inaccurate assertions.
Kris Hammond doesn’t find these digital hiccups frustrating, though. He finds them exciting. He takes screenshots of wildly inaccurate chatbot answers and adds them to his catalog of algorithmic misfires.
“Systems such as ChatGPT and other AI chatbots aren’t looking for the right answer; they are looking for the most likely answer,” says Hammond, a Northwestern computer scientist and AI pioneer. “At the beginning of the current AI wave, if you asked for George Clooney’s age, AI might say he was 36 because that’s how old he was when he was most famous.” And for nearly a quarter century, Daley ’08 H led Chicago as mayor, so it’s easy to understand how AI might make mistakes.
“Chatbots have greatly improved over time — because they started looking things up on Wikipedia,” Hammond says. “Still, that relies on Wikipedia having the correct information, and information can change quickly.”
Since joining Northwestern in 1998, Hammond, the Bill and Cathy Osborn Professor of Computer Science in the McCormick School of Engineering, has dedicated his career to studying and developing AI tools. He approaches artificial intelligence with cautious optimism that it can be our partner — not replacement — in a new information age.
AI has come a long way since Hammond first built his own computer science lab at the University of Chicago in the late 1980s. The field grew incrementally through the ’90s. Then — after IBM’s supercomputer Deep Blue famously defeated Russian chess grandmaster Garry Kasparov during a hard-fought match in 1997 — interest in intelligent systems hit an inflection point. The late 2000s saw the rise of machine learning and deep learning.
In 2010, one year before Apple’s AI assistant Siri, 12 years before OpenAI’s ChatGPT and 13 years before Google’s Gemini, Hammond and fellow Northwestern computer scientist and professor Larry Birnbaum launched the AI startup Narrative Science, which turned structured data into natural-language insights. Narrative Science injected a synthetic soul into AI-generated articles, producing data-driven pieces that sounded deceptively human. Salesforce purchased Narrative Science in 2021 and incorporated its technology into Tableau, Salesforce’s business intelligence and data visualization platform.
Today, Hammond’s experience as an AI innovator informs his leadership of Northwestern’s efforts to improve AI for all.
Although Hammond says he barely remembers his life before computers and coding, there was indeed a time when his world was much more analog. Hammond grew up on the East Coast and spent his high school years in Salt Lake City, where his mother was a social worker and his father was a professor of archaeology at the University of Utah. Over the course of 50 years, Philip C. Hammond excavated several sites in the Middle East and made dozens of trips to Jordan, earning him the nickname Lion of Petra. Kris joined these expeditions for three summers, working as his father’s surveyor and draftsman.
“Now, once a week, I ask ChatGPT for a biography of my father, as an experiment,” Hammond says, bemused. “Sometimes, it gives me a beautifully inaccurate bio that makes him sound like Indiana Jones. Other times, it says he is a tech entrepreneur and that I have followed in his footsteps.”
While those biographical tidbits are more AI-generated falsehoods, Hammond and his father have both traced intelligence from different worlds — one etched in stone and another in silicon. Wanting a deeper understanding of the meaning of intelligence and thought, Hammond studied philosophy as an undergraduate at Yale University and planned to go to law school after graduation. But his trail diverged when a fellow member of a local sci-fi club suggested that Hammond, who had taken one computer science class, try working as a programmer.
“After nine months as a programmer, I decided that’s what I wanted to do for a living,” Hammond says.
That sci-fi club guy was Chris Riesbeck, who is also now a professor of computer science at McCormick. Hammond earned his doctorate in computer science from Yale in 1986. But he didn’t abandon philosophy entirely. Instead, he applied those abstract frameworks — consciousness, knowledge, creativity, logic and the nature of reason — to the pursuit of intelligent systems.
“The structure of thought always fascinated me,” Hammond says. “Looking at it from the perspective of how humans think and how machines ‘think’ — and how we can ‘think’ together — became a driver for me.”
But the word “think” is tenuous in this context, he says. There’s a fundamental and important distinction between true human cognition and what current AI can do — namely, sophisticated mimicry. AI isn’t trying to critically assess data to devise correct answers, says Hammond. Instead, it’s a probabilistic engine, sifting through language likelihoods to finish a sentence — like the predictive text you might see on your phone while composing a message. It is seeking the most likely conclusion to any given string of words.
“These are responsive systems,” he says. “They aren’t reasoning. They just hold words together. That’s why they have problems answering questions about recent events.”
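Hammond's point can be made concrete with a toy example. The sketch below is a hypothetical Python illustration, not code from Hammond or any real chatbot; the phrases and probabilities are invented. It shows how a system that always returns the most probable next word can confidently produce outdated answers:

```python
# Toy sketch (not a real language model): a next-word predictor returns the
# statistically most likely continuation of a phrase, with no check on whether
# it is factually correct. The vocabulary and probabilities here are invented.
next_word_probs = {
    "George Clooney is": {"36": 0.42, "charming": 0.31, "64": 0.08},
    "the mayor of Chicago is": {"Richard M. Daley": 0.47, "Brandon Johnson": 0.12},
}

def predict_next(prompt: str) -> str:
    """Return the highest-probability next word for a prompt, ignoring truth."""
    candidates = next_word_probs.get(prompt, {})
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

print(predict_next("George Clooney is"))        # "36" -- most likely, not most accurate
print(predict_next("the mayor of Chicago is"))  # "Richard M. Daley"
```

A real language model works over vastly larger probability tables learned from text, but the failure mode is the same: frequency in the training data, not current truth, decides the answer.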

Hammond’s philosophy on developing safe and ethical AI relies on a nuanced relationship with the technology. Humans should not take AI-generated answers at face value. Nor should they replace their own thinking with AI. Instead, the ideal lies in an intellectual symbiosis — a partnership between smart friends. Together, human and machine can unlock new levels of productivity and efficiency — a cognitive tag-team for the information age.
One of the dangers of generative AI search engines lies in the ease of receiving answers quickly and effortlessly. Users no longer have to wrestle with ambiguity. Hammond worries this could cause cognitive muscles to atrophy.
“For me, that’s the worst thing imaginable,” he says. “Synthesizing information and critical reasoning skills are crucial.
“I’m not a Luddite. I desperately want intelligent systems,” he adds. “But taking away our ability to reason and think — that’s horrible for humanity.”
Hammond’s recent trip to New Orleans to celebrate his wife’s birthday underscored the importance of reasoning skills. Upon hearing that a partial solar eclipse might be visible in the early morning hours of her birthday, Hammond typed “solar eclipse March 2025” into an AI-powered search engine.
“I got so excited,” he says. “It said the moon would cover 82% of the sun in the afternoon — not the early morning. It was fantastic — and absolutely false. I realized that, if this were true, I should have heard about it on the news. But no one had said a word about it. It was giving me a description of what had happened the year before, simply because there had been more reporting of that event.
“How many times do we get wrong answers and not push deeper? If you don’t ask more questions, you are trapped in a world where you are no longer thinking critically.”
Eager to explore ways users and developers can work together to prevent potential threats from AI, Hammond launched Northwestern’s Center for Advancing Safety of Machine Intelligence (CASMI) in 2022 in partnership with the Underwriters Laboratories (UL) Research Institutes’ Digital Safety Research Institute. One of the first centers of its kind, CASMI aims to establish best practices for evaluating and developing AI that is safe, equitable and beneficial. (CASMI won a 2024 Chicago Innovation Award for its commitment to AI safety.)
“Before we can prevent harm or implement safety measures, we have to understand and articulate what issues exist,” Hammond says. To that end, CASMI has connected researchers across the country to identify those potential harms. “CASMI is funding a dozen projects, which look at the effects of AI on mental health, children, law, medicine and journalism,” says Hammond. Part of this work is aimed at the development of the AI Incident Database, an open-source project that keeps a historical record of AI’s failures and negative effects on users and society.
Beyond the potential erosion of critical thinking skills, Hammond worries about AI’s flaws invisibly seeping into various aspects of society.
Thus far, the collective imagination has fixated on dramatic potentialities — AI stealing human jobs or sentient machines turning rogue. Although autonomous lethal devices do exist — Hammond even provided consultation on the ethical dilemmas of such machines during his time as part of the United Nations Institute for Disarmament Research — AI’s everyday threats are much more subtle. And they aren’t a distant fear in an imagined future. They are already here.
The examples are nearly endless. Physicians have started using speech-to-text software for medical notetaking, but these systems can get words wrong or might guess what’s next based on past context, adding false bits of information or introducing errors into clinical records. Police departments use AI to transcribe audio from bodycam footage to create incident reports — but if the AI is trained on historical incident reports that reflect existing societal biases (such as disproportionately associating certain demographics with criminal activity), that practice can end up amplifying biases. Recently developed chatbots are prone to unsettling “hallucinations.” Unlike simple inaccuracies, these conjured realities are entirely fabricated and nonsensical — and presented as fact. And then there are deepfakes — digitally manipulated images and video, which exacerbate the existential crisis of misinformation and disinformation.
“The promise of AI is alluring,” Hammond says. “The technology might save time, but it also could propagate errors and bias at a speed and scale far beyond human capability. We must approach it with caution.”
As the director of Northwestern’s master of science in AI program, Hammond is already training the next generation of developers to mitigate potential harms. While it can be exciting to play around with AI’s capabilities, he says, developers should focus less on creating “cool tech” and instead focus on developing AI-powered apps and devices that can truly make users’ lives better.
“Many people think bias is the most important problem in AI,” Hammond says. “But it’s impossible to build a system without bias. What’s most important is understanding how the system will be deployed out in the world. If a system is great at answering questions, for example, that may lead users to lose the skill of answering questions on their own.”
To avoid that pitfall, developers can instead build systems that encourage human thought via interactive engagement, prompting users to delve deeper. When thorny, multifaceted issues arise, AI should present various viewpoints.
“We can build systems that deliver multiple perspectives, depending on the question,” Hammond says. Then users can think through nuanced evidence and arrive at their own conclusions.
“As a professor, I would never think my role is simply to stand in a classroom and answer students’ questions,” he adds. “My job is to help students learn how to think, how to solve problems and how to break down challenges into components. Similarly, we should not think about intelligent tools as standalone systems that just answer questions. We should think about them as partners that make us smarter.”
Amanda Morris ’14 MA is senior science and engineering editor in the Office of Global Marketing and Communications.
Kris Hammond’s work supports the University priority of harnessing the power of artificial intelligence.
Reader Responses
Thank you for presenting Professor Kris Hammond’s work on AI. His approach brings hope that AI will advance knowledge rather than overtake it.
A few things stood out for me as I read the article. As he mentioned, it occurred to him — though after the fact — that the media would have reported on the partial solar eclipse if it were actually going to take place while he was in New Orleans for his wife’s birthday, reminding him that it’s essential to look further than the first source one comes across. Along the same lines, I’ve found that many sources repeat the same information verbatim. Does that tell us it’s accurate, or only that sources cut and paste text from one another? Does the number of sources that supply the same information using the exact same words contribute to AI offering that information simply because it shows up so often? All the more reason to apply our own thinking skills when reviewing material, as he recommends.
I strongly agree that “manipulated images and video … exacerbate the existential crisis of misinformation and disinformation.” Bob Beck, a prolific researcher and inventor at the University of Chicago, stated in a paper published in "Advances in Visual Semiotics" in 1995: "With the technological convergence of the visual and verbal modes of learning/knowing and of communicating, we appear to be on the brink of a cultural revolution of unprecedented proportions — substantially greater than that which accompanied the industrial revolution — with which we must learn to cope and to adapt, as individuals, as a species, and as participants in a global process." My colleagues and I at the North Central Regional Educational Laboratory (NCREL), which was eventually absorbed into the American Institutes for Research (AIR), had several conversations with him about imaging at the time he made this statement. He noted how communication started with images (cave paintings, hieroglyphics), followed by written language, and increasingly relies on images again in our own day and age. He warned that this development had the potential to wreak havoc in society. AI seems to have brought that struggle to the fore.
— Rosemary Caruk '88 MS, Berwyn, Ill., via Northwestern Magazine