The broad applicability of artificial intelligence in today’s society necessitates the development and deployment of technologies that can build trust in emerging areas, counter asymmetric threats, and adapt to the ever-changing needs of complex environments.
As part of a new collaboration to advance and support AI research, the MIT Stephen A. Schwarzman College of Computing and the Defense Science and Technology Agency in Singapore are awarding funding to 13 projects led by researchers within the college that target one or more of the following themes: trustworthy AI, enhancing human cognition in complex environments, and AI for everyone. The 13 research projects selected are highlighted below.
“SYNTHBOX: Establishing Real-World Model Robustness and Explainability Using Synthetic Environments” by Aleksander Madry, professor of computer science. Emerging machine learning technology has the potential to significantly help with, and even fully automate, many tasks that until now have been entrusted only to humans. Leveraging recent advances in realistic graphics rendering, data modeling, and inference, Madry’s team is building a radically new toolbox to fuel streamlined development and deployment of trustworthy machine learning solutions.
“Next-Generation NLP Technologies for Low-Resource Tasks” by Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science. Most of the world’s languages lack the rich annotations that natural language technologies typically rely on, and this lack of direct supervision often results in inaccurate, indefensible, and brittle outputs. In a project led by Barzilay and Jaakkola, researchers are developing new text-generation tools for controlled style transfer and novel algorithms for detecting misinformation or suspicious news online.
“Computationally-Supported Role-playing for Social Perspective Taking” by D. Fox Harrell, professor of digital media and artificial intelligence. Drawing on computer science and social science approaches, this project aims to create tools, techniques, and methods to model social phenomena for users of computer-supported role-playing systems — online gaming, augmented reality, and virtual reality — to better understand the perspectives of others with different social identities.
“Improving Situational Awareness for Collaborative Human-Machine First Responder Teams” by Nick Roy, professor of aeronautics and astronautics. When responding to emergencies in urban environments, achieving situational awareness is essential. In a project led by Roy, researchers are developing a multi-agent system in which a team of autonomous air and ground vehicles is designed to arrive first at the scene of an emergency, map the scene to provide a situation report to first responders in advance, and search for people and entities of interest.
“New Representations for Vision” by William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science; and Josh Tenenbaum, professor of cognitive science and computation. An unrealized goal of AI is to model the rich and complicated shapes and textures of real-world scenes depicted in an image. This project focuses on developing neural network representations of images that better meet the needs of vision and graphics: representing a 3D world efficiently while capturing its richness.
“Data-driven Optimization Under Categorical Uncertainty, and Applications to Smart City Operations” by Alexandre Jacquillat, assistant professor of operations research and statistics. Smart city technologies can help major metropolitan areas facing increasing pressure to manage congestion, cut greenhouse gas emissions, improve public safety, and enhance health-care delivery. In a project led by Jacquillat, researchers are working on new AI tools to help manage the cyber-physical infrastructure in smart cities and on the development and deployment of automated decision tools for smart city operations.
“Provably Robust Reinforcement Learning” by Ankur Moitra, the Rockwell International Career Development Associate Professor of Applied Mathematics. Moitra and his team are building on their new framework for robust supervised learning to explore more complex learning problems, including the design of robust algorithms for reinforcement learning in Massart noise models, a space that has yet to be fully explored.
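In the Massart model, each training label is flipped independently with a probability that may vary from example to example but never exceeds a fixed bound below one-half. The following minimal sketch, illustrative only and not the project’s code, simulates such corruption for a simple supervised dataset; the bound eta, the data, and the labeling rule are arbitrary choices made for the example.

```python
# Minimal sketch of Massart label noise (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

# Clean, linearly separable data: the label is the sign of the first coordinate.
X = rng.normal(size=(1000, 2))
y_clean = np.sign(X[:, 0])

# Massart corruption: flip each label independently with a per-example
# probability chosen (here at random, in general adversarially) up to a
# bound eta < 1/2.
eta = 0.4
flip_probs = rng.uniform(0.0, eta, size=len(y_clean))
flips = rng.random(len(y_clean)) < flip_probs
y_noisy = np.where(flips, -y_clean, y_clean)

# A robust learner should recover a classifier close to sign(x[0]) from
# (X, y_noisy) alone; the project studies analogous guarantees for
# reinforcement learning rather than supervised learning.
print(f"Fraction of labels flipped: {flips.mean():.2%}")
```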
“Audio Forensics” by James Glass, senior research scientist. Ongoing improvements in the ability to manipulate or generate multimedia content such as speech, images, and video are resulting in ever more natural and realistic “deepfake” content that is increasingly difficult to discern from the real thing. In a project led by Glass, researchers are developing a set of deep learning models that can be used to identify manipulated or synthetic speech content, as well as detect the nature of deepfakes to help analysts better understand the underlying objective of the manipulation and how much effort is required to create the fake content.
“Building Dependable Autonomous Systems through Learning Certified Decisions and Control” by Chuchu Fan, assistant professor of aeronautics and astronautics. Machine learning creates unprecedented opportunities for achieving full autonomy, but learning-based methods in autonomous systems can and do fail, due to poor-quality data, modeling errors, coupling with other agents, and complex interactions with human and computer systems in modern operational environments. Fan and her research group are building a framework of algorithms, theories, and software tools for learning certified planning and control, as well as developing firmware platforms for the automatic plug-and-play design of quadrotors and the formation control of mixed ground and aerial vehicles.
“Online Learning and Decision-making Under Uncertainty in Complex Environments” by Patrick Jaillet, the Dugald C. Jackson Professor of Electrical Engineering and Computer Science. Technical advances in computing, telecommunications, sensing, and other information technologies provide tremendous opportunities to use dynamic information to enhance productivity, optimize performance, and solve new, complex online problems of great practical interest. Many of these opportunities, however, pose significant methodological challenges in how to formulate and solve the resulting problems. In a project led by Jaillet, researchers are using machine learning techniques to systematically integrate online optimization and online learning in order to support human decision-making under uncertainty.
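As a generic illustration of the kind of sequential decision problem this line of work addresses, and not of the project’s own algorithms, the sketch below runs an epsilon-greedy multi-armed bandit: at each round the learner must commit to an action before observing its noisy payoff, balancing exploration of uncertain options against exploitation of the best current estimate. The arm payoffs and exploration rate are invented values.

```python
# Minimal epsilon-greedy bandit sketch (illustrative values only).
import numpy as np

rng = np.random.default_rng(1)

true_means = np.array([0.3, 0.5, 0.7])  # unknown to the learner
counts = np.zeros(3)
estimates = np.zeros(3)
epsilon = 0.1
total_reward = 0.0

for t in range(10_000):
    # Explore a random arm with probability epsilon; otherwise exploit the
    # arm with the best current estimate.
    arm = rng.integers(3) if rng.random() < epsilon else int(np.argmax(estimates))
    reward = rng.binomial(1, true_means[arm])  # noisy, bandit-style feedback
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    total_reward += reward

print(f"Average reward: {total_reward / 10_000:.3f} (the best arm pays 0.7 on average)")
```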
“Analytics-Guided Communication to Counteract Filter Bubbles and Echo Chambers” by Deb Roy, professor of media arts and sciences. Social media technologies that promised to open up our worlds have instead driven us algorithmically into cocoons of homogeneity. Roy and his team are developing language models and methods to counteract the effects of these technologies, which have exacerbated socioeconomic divides and limited exposure to different perspectives, curbing opportunities for users to learn from others who may not look, think, or live like them.
“Decentralized Learning with Diverse Data” by Costis Daskalakis, professor of electrical engineering and computer science; Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, department head of electrical engineering and computer science, and deputy dean of academics for the MIT Schwarzman College of Computing; and Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science. In many AI settings, it is important to combine the diverse experiences of, and the decentralized data collected by, heterogeneous agents in order to develop better models for prediction and decision-making across the new tasks these agents perform. Bringing together tools from machine learning, optimization, control, statistics, statistical physics, and game theory, this project aims to advance the fundamental science of federated or fleet learning (learning from decentralized agents with diverse data), using robotics as an application area that provides a rich and relevant source of data.
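To make the notion of federated or fleet learning concrete, here is a minimal federated-averaging-style sketch under simplifying assumptions that are not drawn from the project: each agent holds its own data and takes a few local gradient steps on a shared linear model, and a central step averages the local parameters weighted by dataset size.

```python
# Minimal federated-averaging sketch (illustrative, not the project's method).
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([2.0, -1.0])

def make_agent_data(n):
    # Each agent samples inputs around its own offset, so the local
    # datasets are diverse rather than identically distributed.
    X = rng.normal(loc=rng.uniform(-2, 2), size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

agents = [make_agent_data(n) for n in (50, 200, 500)]
global_w = np.zeros(2)

for round_ in range(5):
    local_ws, sizes = [], []
    for X, y in agents:
        # Local update: a few gradient steps on the agent's own data,
        # starting from the current global model.
        w = global_w.copy()
        for _ in range(20):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)
        sizes.append(len(y))
    # Server step: average the local models, weighted by dataset size.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("Learned weights:", np.round(global_w, 3), "true weights:", true_w)
```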
“Trustworthy, Deployable 3D Scene Perception via Neuro-symbolic Probabilistic Programs” by Vikash Mansinghka, principal research scientist; Joshua Tenenbaum, professor of cognitive science and computation; and Antonio Torralba, Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science. To be deployable in the real world, 3D scene perception systems need to generalize across environments and sensor configurations, and adapt to scene and environment changes, without costly re-training or fine-tuning. Building on the researchers’ breakthroughs in probabilistic programming and in real-time neural Monte Carlo inference for symbolic generative models, the project team is developing a domain-general approach to trustworthy, deployable 3D scene perception that addresses fundamental limitations of state-of-the-art deep learning systems.
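For readers unfamiliar with probabilistic programming, the sketch below conveys the basic pattern of inference over a generative model, using a deliberately tiny stand-in for a scene (a single object’s distance from a depth sensor) and plain importance sampling rather than the neural Monte Carlo inference the project develops; all quantities are invented for illustration.

```python
# Minimal prior-plus-likelihood inference sketch (illustrative stand-in only).
import numpy as np

rng = np.random.default_rng(3)

def prior_sample(n):
    # Scene hypothesis: the object's distance from the sensor, in meters,
    # uniform over a plausible range for an indoor room.
    return rng.uniform(0.5, 5.0, size=n)

def likelihood(observed_depth, distance, noise=0.1):
    # Sensor model: the depth reading equals the true distance plus
    # Gaussian noise with standard deviation `noise`.
    return np.exp(-0.5 * ((observed_depth - distance) / noise) ** 2)

observed = 2.37                   # a simulated depth reading
hypotheses = prior_sample(5000)   # candidate scenes drawn from the prior
weights = likelihood(observed, hypotheses)
weights /= weights.sum()          # normalized importance weights

posterior_mean = float(np.sum(weights * hypotheses))
print(f"Posterior mean distance: {posterior_mean:.2f} m (observation: {observed} m)")
```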