May 8, 2025

CGU’s 2025 Keynote Speaker Kate Darling Asks the Questions AI Can’t Answer

Dr. Kate Darling smiling against a dark wooden background, wearing a brown leather top.

As Claremont Graduate University prepares to celebrate its commencement on May 17, the theme of “Harnessing AI for Transformative Change and Justice” sets the stage for a conversation that is both urgent and intriguing.

Artificial intelligence is no longer some distant concept. It’s here, embedded in everything from chatbots to autonomous vehicles, and it’s reshaping our world. But how do we ensure that AI isn’t just a tool for innovation, but a force for good?

The questions surrounding AI’s potential for social justice and human well-being are important, and no one is better equipped to guide us through that conversation than Dr. Kate Darling, who will join CGU as this year’s keynote speaker and recipient of an Honorary Doctorate of Science.

A leading voice in the ethics of robotics and artificial intelligence, Darling brings a refreshingly human perspective to a field often dominated by engineering and efficiency.

At the MIT Media Lab, where she serves as a research scientist, and at the Robotics and AI Institute, where she leads the Ethics & Society initiative, Darling has built a career asking the tough questions technologists might overlook.

What does it mean when humans form emotional bonds with robots? How do legal and political systems shape the way AI enters our lives? And most pressingly: Who benefits — and who might be harmed — when machines become social actors?

“I’ve always been interested in how systems shape human behavior,” Darling explains. “That’s why I originally went to law school. I was interested in the law as a system that can shape human behavior. At the same time, I found myself surrounded by a bunch of cool robotics, and I was talking to all the robotics students and realized that robotics and these technologies are also systems that can shape human behavior.”

That launched her into a field that was just beginning to take form. Long before robot pets and AI boyfriends were trending on social media, Darling was researching how people project emotions and agency onto machines. Her work revealed something fundamental: We’re wired to anthropomorphize. Whether it’s a robot dinosaur or a chat interface, if it mimics our behavior, especially through language, we respond emotionally.

“I did not predict this shift — I thought it would take a lot longer for us to have convincing language,” she says. “The technology has really changed in the past few years with the explosion of these large language models and AI suddenly being able to have real conversations with people that feel authentic. People will now automatically project agency onto AI systems and treat them like they’re alive, even though they know they’re interacting with AI.”

That emotional connection isn’t inherently dangerous, but it is powerful. And like all powerful tools, it depends on context. For Darling, that context includes economic incentives, political structures, and corporate motivations.

“If a company builds an AI that’s acting in the company’s interest instead of the users’, that can be harmful,” she says. “But if it’s truly trying to benefit the user, maybe it’s fine. It all depends on the environment in which these systems are deployed.”

Darling’s training in law and economics gives her a distinct vantage point. Her doctoral research at ETH Zurich explored copyright transfers and the incentives behind content creation. She’s taught robot ethics at Harvard Law alongside legal scholar Lawrence Lessig and served as an advisor on intellectual property at MIT. But what ties her work together isn’t academic curiosity alone; it’s a commitment to social justice and foresight in how new technologies reshape our world.

“The thing we need to be most mindful of is that it’s not just the technology that determines the impact — it’s the social systems we deploy it into,” Darling says. “If you take any kind of technology and you deploy it in a political economy of unbridled corporate capitalism, then the technology is going to be used to exploit people and treat them like replaceable commodities. Whereas, if you’re introducing it into a system where the focus is on supporting people or human workers, you’re going to have a very different effect.”

This systemic lens informs her current work at the Robotics and AI Institute, where she’s assembled a team of social scientists to study everything from robotics’ effect on labor markets to its role in city infrastructure. It’s a unique combination of research and practice, and one that highlights the theme CGU has chosen to celebrate: transformation through justice.

“We’re really just getting started, but it’s been interesting having a team of social scientists embedded in an institute that’s working at the cutting edge and trying to integrate AI work into robots, because we get to see what’s working, what’s not working and get to foreshadow a little bit where this is all going.

“I think that so long as we are leaning into the potential of the technology to provide something that we didn’t already have previously or to supplement human ability or to support human flourishing, that there’s tremendous potential in these systems.”

At a time when AI seems to be everywhere — writing essays, diagnosing diseases, even talking back from your phone — Darling urges us to slow down and ask better questions. Not just “What can AI do?” but “Who does it serve?” and “What kind of future do we want to build with it?”