
Student group explores the ethical dimensions of artificial intelligence

MIT AI Ethics Reading Group was founded by students who saw firsthand how technology developed with good intentions could be problematic.
Caption: At a recent meeting, the MIT AI Ethics Reading Group debated options for teaching ethics in a traditional computer science curriculum. Postdoc Abby Jaques (left) is developing ethics modules that can be incorporated into existing coursework.
Photo: Kim Martineau

For years, the tech industry followed a move-fast-and-break-things approach, and few people seemed to mind as a wave of astonishing new tools for communicating and navigating the world appeared on the market.

Now, amid rising concerns about the spread of fake news, the misuse of personal data, and the potential for machine-learning algorithms to discriminate at scale, people are taking stock of what the industry broke. Into this moment of reckoning come three MIT students, Irene Chen, Leilani Gilpin, and Harini Suresh, founders of the new MIT AI Ethics Reading Group.

All three are graduate students in the Department of Electrical Engineering and Computer Science (EECS) who had done stints in Silicon Valley, where they saw firsthand how technology developed with good intentions could go horribly wrong.

“AI is so cool,” said Chen during a chat in Lobby 7 on a recent morning. “It’s so powerful. But sometimes it scares me.” 

The founders had debated the promise and perils of AI in class and among friends, but their push to reach a wider audience came in September, at a Google-sponsored workshop on fairness in machine learning in Cambridge. There, an MIT professor floated the idea of an ethics forum and put the three women in touch.

When MIT announced plans last month to create the MIT Stephen A. Schwarzman College of Computing, the three launched the MIT AI Ethics Reading Group. Amid the enthusiasm following the Schwarzman announcement, more than 60 people turned up to their first meeting.

One was Sacha Ghebali, a master’s student at the MIT Sloan School of Management. He had taken a required ethics course in his finance program at MIT and was eager to learn more.

“We’re building tools that have a lot of leverage,” he says. “If you don’t build them properly, you can do a lot of harm. You need to be constantly thinking about ethics.”

On a recent evening, Ghebali was among those returning for a second discussion. They gathered around a stack of pizza boxes in an empty classroom as Gilpin kicked off the meeting by recapping the fatal crash last spring in which a self-driving Uber struck a pedestrian. Who should be liable, Gilpin asked: the engineer who programmed the car, or the person behind the wheel?

A lively debate followed. The students then broke into small groups as the conversation shifted to how ethics should be taught: as a stand-alone course or integrated throughout the curriculum. They considered two models: Harvard, which embeds philosophy and moral reasoning into its computer science classes, and Santa Clara University, in Silicon Valley, which offers a case-study-based module on ethics within its introductory data science courses.

Reactions in the room were mixed.

“It’s hard to teach ethics in a CS class, so maybe there should be separate classes,” one student offered. Others thought ethics should be integrated at each level of technical training.

“When you learn to code, you learn a design process,” said Natalie Lao, an EECS graduate student helping to develop AI courses for K-12 students. “If you include ethics in your design practice, you learn to internalize ethical programming as part of your workflow.”

The students also debated whether stakeholders beyond the end-user should be considered. “I was never taught when I’m building something to talk to all the people it will affect,” Suresh told the group. “That could be really useful.”

How the Institute should teach ethics in the MIT Schwarzman College of Computing era remains unclear, says Hal Abelson, the Class of 1922 Professor of Computer Science and Engineering, who helped start the group and attended both meetings. “This is really just the beginning,” he says. “Five years ago, we weren’t even talking about people shutting down the steering wheel of your car.”

As AI continues to evolve, questions of safety and fairness will remain a foremost concern. In their own research at MIT, the group’s founders are developing tools to address the very dilemmas raised in its discussions.

Gilpin is creating the methodologies and tools to help self-driving cars and other autonomous machines explain themselves. For these machines to be truly safe and widely trusted, she says, they need to be able to interpret their actions and learn from their mistakes. 

Suresh is developing algorithms that make it easier for people to use data responsibly. In a summer internship with Google, she looked at how algorithms trained on Google News and other text-based datasets pick up on certain features to learn biased associations. Identifying sources of bias in the data pipeline, she says, is key to avoiding more serious problems in downstream applications. 
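The kind of bias she describes is easy to see firsthand. The sketch below is a hypothetical illustration, not the group’s code: it loads publicly available word2vec vectors trained on Google News through the gensim library and poses an analogy query that such embeddings often complete with gendered terms.

```python
# Minimal sketch: biased associations in word embeddings (illustrative only,
# not the reading group's code).
import gensim.downloader as api

# Public word2vec vectors trained on Google News (large download on first use).
model = api.load("word2vec-google-news-300")

# Analogy query: "man" is to "doctor" as "woman" is to ... ?
for word, score in model.most_similar(positive=["doctor", "woman"],
                                      negative=["man"], topn=3):
    print(f"{word}\t{score:.3f}")
# Embeddings trained on news text often rank words like "nurse" highly here,
# an association learned from the data rather than programmed in.
```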

Chen, formerly a data scientist and chief of staff at Dropbox, develops machine-learning tools for health care. In a new paper, “Why Is My Classifier Discriminatory?”, she argues that the fairness of AI predictions should be measured and corrected by collecting more data, not just by tweaking the model. She presents the paper next month at Neural Information Processing Systems (NeurIPS), the world’s largest machine-learning conference.
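The paper’s core idea lends itself to a small sketch. The audit below uses made-up synthetic data, not the paper’s: it measures a classifier’s error rate per subgroup rather than reporting a single overall number. If the gap tracks how little data the underrepresented group has, collecting more data is the remedy, not only adjusting the model.

```python
# Hypothetical per-group fairness audit on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group draws inputs from a slightly different distribution.
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
         > shift.sum()).astype(int)
    return X, y

# Group 1 is underrepresented in training, a common source of error gaps.
X0, y0 = make_group(2000, np.zeros(2))
X1, y1 = make_group(150, np.array([1.0, -1.0]))
X = np.vstack([X0, X1])
y = np.concatenate([y0, y1])
g = np.concatenate([np.zeros(len(y0)), np.ones(len(y1))])  # subgroup labels

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.3, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Report the error rate separately for each subgroup, not just overall.
for group in (0, 1):
    mask = g_te == group
    err = np.mean(pred[mask] != y_te[mask])
    print(f"group {group}: error {err:.3f} on {int(mask.sum())} test examples")
```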

“So many of the problems at Dropbox, and now in my research at MIT, are completely new,” she says. “There isn't a playbook. Part of the fun and challenge of working on AI is that you're making it up as you go.”

The AI ethics reading group holds its last two meetings of the semester on Nov. 28 and Dec. 12.
