
Materials Day talks examine the promises and challenges of AI and machine learning

The ability to predict and make new materials faster highlights the need for safety, reliability, and accurate data.
Eight distinguished researchers spoke at the 2019 Materials Day Symposium. Pictured here (l-r) are MIT professors Carl Thompson, Asu Ozdaglar, Elsa Olivetti, and Ju Li. Additional event photos show speakers Klavs Jensen, Rafael Gomez-Bombarelli, and Juejun "JJ" Hu, as well as Materials Research Laboratory Associate Director Mark Beals introducing keynote speaker Brian Storey.
Images: Denis Paiste/Materials Research Laboratory

The promises and challenges of artificial intelligence and machine learning took center stage at the Oct. 9 MIT Materials Day Symposium, which featured presentations on new ways of forming zeolite compounds, faster drug synthesis, advanced optical devices, and more.

“Machine learning is having an impact in all areas of materials research,” Materials Research Laboratory Director Carl V. Thompson said.

“We’re increasingly able to work in tandem with machines to help us decide what materials to make,” said Elsa A. Olivetti, the Atlantic Richfield Associate Professor of Energy Studies. Machine learning is also guiding how to make those materials with new insights into synthesis methods, and, in some cases (such as with robotic systems), actually making those materials, she noted.

Keynote speaker Brian Storey, director of accelerated materials design and discovery at Toyota Research Institute, spoke about using machine learning to advance the switch from the internal combustion engine to electric vehicles, and Professor Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering, spoke about atomic engineering using elastic strain and radiation nudging of atoms.

Porous materials

Olivetti and Rafael Gomez-Bombarelli, the Toyota Assistant Professor in Materials Processing, worked together to apply machine learning to develop a better understanding of porous materials called zeolites, formed from silicon and aluminum oxide, that have a wide range of uses, from cat litter to petroleum refining.

“Essentially, the idea is that the pore has the right size to hold organic molecules,” Gomez-Bombarelli said. While only about 250 zeolites of this class are known to engineers, physicists can calculate hundreds of thousands of possible ways these structures can form. “Some of them can be converted into each other,” he said. “So, you could mine one zeolite, put it under pressure, or heat it up, and it becomes a different one that could be more valuable for a specific application.”

A traditional method was to interpret these crystalline structures as a combination of building blocks. However, when zeolite transformations were analyzed, more than half the time there were no building blocks in common between the original zeolite before the change and the new zeolite after the change. “Building block theory has some interesting ingredients, but doesn’t quite explain the rules to go from A to B,” Gomez-Bombarelli said.

Graph-based approach

Gomez-Bombarelli’s new graph-based approach finds that when each zeolite framework structure is represented as a graph, these graphs match before and after in zeolite transformation pairs. “Some classes of transformations only happen between zeolites that have the same graph,” he said.
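To illustrate the core idea, the short Python sketch below represents two hypothetical frameworks as graphs of connected tetrahedral sites and checks whether they share the same graph. It is a toy example with invented connectivity lists, not Gomez-Bombarelli's actual pipeline.

```python
# A minimal sketch (not the authors' actual pipeline): represent each zeolite
# framework as a graph whose nodes are tetrahedral T-sites and whose edges are
# shared-oxygen bridges, then test whether two frameworks share the same graph.
# The connectivity lists below are invented for illustration.
import networkx as nx

def framework_graph(edges):
    """Build an undirected graph from a list of T-site connections."""
    g = nx.Graph()
    g.add_edges_from(edges)
    return g

# Hypothetical connectivity lists for two small frameworks.
zeolite_a = framework_graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
zeolite_b = framework_graph([(10, 11), (11, 12), (12, 13), (13, 10), (10, 12)])

# If the graphs match, a transformation between the two frameworks is
# plausible under the graph-based rule described above.
print(nx.is_isomorphic(zeolite_a, zeolite_b))  # True for this toy pair
```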

This work evolved from Olivetti’s data mining of 2.5 million materials science journal articles to uncover recipes for making different inorganic materials. The zeolite study examined 70,000 papers. “One of the challenges in learning from the literature is we publish positive examples, we publish data of things that went well,” Olivetti said. In the zeolite community, researchers also publish what doesn’t work. “That’s a valuable dataset for us to learn from,” she said. “What we’ve been able to use this dataset for is to try to predict potential synthesis pathways for making particular types of zeolites.”

In earlier work with colleagues at the University of Massachusetts, Olivetti developed a system that identified common scientific words and techniques found in sentences across this large library and brought together similar findings. “One important challenge in natural language processing is to draw this linked information across a document,” Olivetti explained. “We are trying to build tools that are able to do that linking,” she said.
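As a rough illustration of the kind of extraction this involves, the sketch below pulls out synthesis-step sentences and any temperatures they mention using simple keyword rules. The group's actual pipeline relies on trained natural language processing models, and the example sentences here are invented.

```python
# A deliberately simple sketch of literature mining, assuming plain-text
# sentences as input. It is not the group's actual NLP pipeline, which uses
# trained models rather than keyword rules.
import re

SYNTHESIS_ACTIONS = {"heated", "calcined", "stirred", "dried", "mixed"}

def extract_steps(sentences):
    """Collect sentences that look like synthesis steps, along with any
    temperatures they mention, so related steps can be linked later."""
    steps = []
    for s in sentences:
        words = set(s.lower().split())
        if words & SYNTHESIS_ACTIONS:
            temps = re.findall(r"(\d+)\s*°?C", s)
            steps.append({"sentence": s, "temperatures_C": temps})
    return steps

paper = [
    "The gel was stirred for 24 hours.",            # hypothetical text
    "The product was calcined at 550 C for 6 h.",
    "Zeolites have many industrial applications.",
]
for step in extract_steps(paper):
    print(step)
```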

AI-assisted chemical synthesis

Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering and professor of materials science and engineering, described a chemical synthesis system that combines artificial intelligence-guided processing steps with a robotically operated modular reaction system.

For those unfamiliar with synthesis, Jensen explained: “You have reactants you start with, you have reagents that you have to add, catalysts and so forth to make the reaction go, you have intermediates, and ultimately you end up with your product.”

The artificial intelligence system combed 12.5 million reactions, creating a set of rules, or library, from about 160,000 of the most commonly used synthesis recipes, Jensen said. This machine learning approach suggests processing conditions such as what catalysts, solvents, and reagents to use in the reaction.
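In spirit, the suggestion step resembles looking up which conditions have most often worked for similar reactions. The toy sketch below does this by frequency counting over a handful of invented records; the MIT system itself learns from millions of published reactions with machine learning models rather than simple counts.

```python
# A toy illustration of condition suggestion by frequency counting over past
# reactions of the same class. The data below is invented; the published MIT
# system uses trained models over millions of reactions.
from collections import Counter

past_reactions = [
    {"class": "amide coupling", "solvent": "DMF", "reagent": "EDC"},
    {"class": "amide coupling", "solvent": "DMF", "reagent": "HATU"},
    {"class": "amide coupling", "solvent": "DCM", "reagent": "EDC"},
]

def suggest_conditions(reaction_class, records):
    """Return the most frequently reported solvent and reagent for a class."""
    matching = [r for r in records if r["class"] == reaction_class]
    solvent = Counter(r["solvent"] for r in matching).most_common(1)[0][0]
    reagent = Counter(r["reagent"] for r in matching).most_common(1)[0][0]
    return {"solvent": solvent, "reagent": reagent}

print(suggest_conditions("amide coupling", past_reactions))
# {'solvent': 'DMF', 'reagent': 'EDC'}
```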

“You can have the system take whatever information it got from the published literature about conditions and so on and you can use that to form a recipe,” he said. Because there is not enough data yet to inform the system, a chemical expert still needs to step in to specify concentrations, flow rates, and process stack configurations, and to ensure safety before sending the recipe to the robotic system.

The researchers demonstrated this system by predicting synthesis plans for 15 drugs or drug-like molecules — the painkiller lidocaine, for example, and several high blood pressure drugs — and then making them with the system. The flow reactor system contrasts with a batch system. “In order to be able to accelerate the reactions, we use typically much more aggressive conditions than are done in batch — high temperatures and higher pressures,” Jensen said.

The modular system consists of a processing tower with interchangeable reaction modules and a set of different reagents, which are connected together by the robot for each synthesis. These findings were reported in Science.

Former PhD students Connor W. Coley and Dale A. Thomas built the computer-aided synthesis planner and the flow reactor system, respectively, and former postdoc Justin A. M. Lummiss did the chemistry along with a large team of MIT Undergraduate Research Opportunity Program students, PhD students, and postdocs. Jensen also notes contributions from MIT faculty colleagues Regina Barzilay, William H. Green, A. John Hart, Tommi Jaakkola, and Tim Jamison. MIT has filed a patent for the robotic handling of fluid connections. The software suite that suggests and prioritizes possible synthesis routes is open source, and an online version is at the ASKCOS website.

Robustness in machine learning

Deep learning systems perform amazingly well on benchmark tasks such as image classification and natural language processing applications, said Professor Asu Ozdaglar, who heads MIT’s Department of Electrical Engineering and Computer Science. Still, researchers are far from understanding why these deep learning systems work, when they will work, and how they generalize. And when they get things wrong, they can go completely awry.

Ozdaglar gave the example of a state-of-the-art classifier that can look at a picture of a cute pig and recognize the image as that of a pig. But, “If you add a little bit of, very little, perturbation, what happens is basically the same classifier thinks that’s an airliner,” Ozdaglar said. “So this is sort of an example where people say machine learning is so powerful, it can make pigs fly,” she said, accompanied by audience laughter. “And this immediately tells us basically we have to go beyond our standard approaches.”
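One standard way such a tiny perturbation is constructed is the fast gradient sign method, sketched below in PyTorch. The specific attack behind the flying-pig example is not identified in the talk, so this is a generic illustration, and the classifier and image names in the usage comment are placeholders.

```python
# A minimal sketch of crafting a small adversarial perturbation with the
# standard fast gradient sign method (FGSM); not necessarily the attack used
# in the talk's example.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return image + epsilon * sign(gradient of the loss w.r.t. the image)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage with any classifier taking a (1, 3, 224, 224) image tensor and a
# (1,) label tensor (placeholder names):
# adversarial = fgsm_perturb(classifier, pig_image, pig_label)
```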

A potential solution lies in an optimization formulation known as a Minimax, or MinMax, problem. Another place where the MinMax formulation arises is in generative adversarial network, or GAN, training. Using an example of images of real cars and fake images of cars, Ozdaglar explained, “We would like these fake images to be drawn from the same distribution as the training set, and this is achieved using two neural networks competing with each other, a generator network and a discriminator network. The generator network creates from random noise these fake images that the discriminator network tries to pull apart to see whether this is real or fake.”

“It’s basically another MinMax problem whereby the generator is trying to minimize the distance between these two distributions, fake and real. And then the discriminator is trying to maximize that,” she said. The MinMax problem approach has become the backbone of robust training of deep learning systems, she noted.
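For reference, the two MinMax problems described above are commonly written as follows, with the generator G minimizing and the discriminator D maximizing the GAN objective, and with robust training minimizing the worst-case loss over small perturbations δ. These are the standard formulations from the literature, not slides from the talk.

```latex
% GAN training: generator G minimizes, discriminator D maximizes
\min_{G}\max_{D}\;
\mathbb{E}_{x\sim p_{\text{data}}}\!\left[\log D(x)\right]
+\mathbb{E}_{z\sim p_{z}}\!\left[\log\bigl(1-D(G(z))\bigr)\right]

% Robust (adversarial) training: worst-case loss over small perturbations \delta
\min_{\theta}\;
\mathbb{E}_{(x,y)}\!\left[\max_{\|\delta\|\le\epsilon} L\bigl(\theta,\, x+\delta,\, y\bigr)\right]
```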

Ozdaglar added that EECS faculty are applying machine learning to new areas, including health care, citing the work of Regina Barzilay in detecting breast cancer and David Sontag in using electronic medical records for medical diagnosis and treatment.

The EECS undergraduate machine learning course (6.036) hosted 800 students last spring, and consistently has 600 or more students enrolled, making it the most popular course at MIT. The new Stephen A. Schwarzman College of Computing provides an opportunity to create a more dynamic and adaptable structure than MIT’s traditional department structure. For example, one idea is to create several cross-departmental teaching groups. “We envision things like courses in the foundations of computing, computational science and engineering, social studies of computing, and have these courses taken by all of our students taught jointly by our faculty across MIT,” she said.

Optical advantage

Juejun "JJ" Hu, associate professor of materials science and engineering, detailed his research coupling a silicon chip-based spectrometer for detecting infrared light wavelengths to a newly created machine learning algorithm. Ordinary spectrometers, going back to Isaac Newton’s first prism, work by splitting light, which reduces its intensity. Hu’s version instead collects all of the light at a single detector, which preserves intensity but poses the problem of identifying different wavelengths from a single capture.

“If you want to solve this trade-off between the (spectral) resolution and the signal-to-noise ratio, what you have to do is resort to a new type of spectroscopy tool called wavelength multiplexing spectrometer,” Hu said. His new spectrometer architecture, which is called digital Fourier transform spectroscopy, incorporates tunable optical switches on a silicon chip. The device works by measuring the intensity of light at different optical switch settings and comparing the results. “What you have is essentially a group of linear equations that gives you some linear combination of the light intensity at different wavelengths in the form of a detector reading,” he said.

A prototype device with six switches supports a total of 64 unique optical states, which can provide 64 independent readings. “The advantage of this new device architecture is that the performance doubles every time you add a new switch,” he said. Working with Brando Miranda at the Center for Brains, Minds and Machines at MIT, he developed a new algorithm, Elastic D1, that achieves a resolution down to 0.2 nanometers and gives an accurate light measurement with only two consecutive measurements.
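Conceptually, recovering the spectrum means solving the linear system formed by the detector readings. The sketch below does this with ordinary least squares on an invented mixing matrix; it is not the Elastic D1 algorithm, only an illustration of the linear-algebra step Hu describes.

```python
# A generic sketch of the reconstruction step, assuming each switch setting
# yields one detector reading that is a known linear combination of the
# unknown per-wavelength intensities. Plain least squares, not Elastic D1;
# the mixing matrix below is random for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_wavelengths, n_states = 16, 64          # 6 switches -> 2**6 = 64 states

true_spectrum = rng.random(n_wavelengths)                              # unknown in practice
A = rng.integers(0, 2, size=(n_states, n_wavelengths)).astype(float)   # switch-state mixing matrix
readings = A @ true_spectrum                                           # detector readings

# Recover the spectrum by solving the overdetermined linear system.
estimate, *_ = np.linalg.lstsq(A, readings, rcond=None)
print(np.allclose(estimate, true_spectrum))  # True when A has full column rank (the typical case)
```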

“We believe this kind of unique combination between the hardware of a new spectrometer architecture and the algorithm can enable a wide range of applications ranging from industrial process monitoring to medical imaging,” Hu said. Hu also is applying machine learning in his work on complex optical media such as metasurfaces, which are new optical devices featuring an array of specially designed optical antennas that add a phase delay to the incoming light.

Poster session winners

Nineteen MIT postdocs and graduate students gave two-minute talks about their research during a poster session preview. At the Materials Day Poster Session immediately following the symposium, award winners were mechanical engineering graduate student Erin Looney, media arts and sciences graduate student Bianca Datta, and materials science and engineering postdoc Michael Chon.

The Materials Research Laboratory serves interdisciplinary groups of faculty, staff, and students, supported by industry, foundations, and government agencies to carry out fundamental engineering research on materials. Research topics include energy conversion and storage, quantum materials, spintronics, photonics, metals, integrated microsystems, materials sustainability, solid-state ionics, complex oxide electronic properties, biogels, and functional fibers.
