
3 Questions: Thomas Malone and Daniela Rus on how AI will change work

MIT Task Force on the Work of the Future releases research brief "Artificial Intelligence and the Future of Work."
Side-by-side portraits of Thomas Malone and Daniela Rus
Caption: Thomas Malone (left) and Daniela Rus. Not pictured: Robert Laubacher, associate director of the MIT Center for Collective Intelligence.

As part of the MIT Task Force on the Work of the Future’s series of research briefs, Professor Thomas Malone, Professor Daniela Rus, and Robert Laubacher collaborated on "Artificial Intelligence and the Future of Work," a brief that provides a comprehensive overview of AI today and what lies at the AI frontier. 

The authors delve into the question of how work will change with AI and provide policy prescriptions that speak to different parts of society. Thomas Malone is director of the MIT Center for Collective Intelligence and the Patrick J. McGovern Professor of Management in the MIT Sloan School of Management. Daniela Rus is director of the Computer Science and Artificial Intelligence Laboratory, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and a member of the MIT Task Force on the Work of the Future. Robert Laubacher is associate director of the MIT Center for Collective Intelligence.

Here, Malone and Rus provide an overview of their research.


Q: You argue in your brief that despite major recent advances, artificial intelligence is nowhere close to matching the breadth and depth of perception, reasoning, communication, and creativity of people. Could you explain some of the limitations of AI?

Rus: Despite recent and significant strides in the AI field, and great promise for the future, today’s AI systems are still quite limited in their ability to reason, make decisions, and interact reliably with people and the physical world. Some of today’s greatest successes are due to a machine learning method called deep learning. These systems are trained on vast amounts of data that must be manually labeled, and their performance depends on the quantity and quality of that data. The larger the training set for the network, the better its performance and, in turn, the better the product that relies on the machine learning engine. But training large models comes at a high computational cost, and bad training data leads to bad performance: when the data is biased, the system’s responses propagate that bias.
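To make this concrete, here is a minimal sketch, not drawn from the brief, of the kind of supervised training loop Rus describes, written in Python with PyTorch. The tiny model, the synthetic "manually labeled" dataset, and every parameter choice are illustrative assumptions; the point is simply that the learned model can only be as good as the labeled data it is trained on.

import torch
from torch import nn

# Hypothetical "manually labeled" dataset: 1,000 examples, 20 features, 2 classes.
# In practice these labels come from human annotators, and any bias in them
# is inherited by the trained model.
features = torch.randn(1000, 20)
labels = torch.randint(0, 2, (1000,))

# A small deep-learning classifier and a standard training setup (all assumed).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: repeatedly adjust the model so its predictions fit the labeled examples.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()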

Another limitation of current AI systems is robustness. State-of-the-art classifiers achieve impressive performance on benchmarks, but their predictions tend to be brittle: inputs that were initially classified correctly can become misclassified once a carefully constructed but imperceptible perturbation is added to them. An important consequence of this lack of robustness is a lack of trust, since there is no guarantee that a given input will be processed and classified correctly. Moreover, the complex nature of training and using neural networks leads to systems that are difficult for people to understand and that cannot explain how they reached their decisions.
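One widely studied way to construct the kind of imperceptible perturbation Rus describes is the fast gradient sign method. The sketch below reuses the hypothetical model, data, and loss function from the previous example and is illustrative only; it is not taken from the brief.

# Take one input the model currently handles and compute the gradient of the
# loss with respect to that input.
epsilon = 0.05                                 # perturbation budget (assumed)
x = features[:1].clone().requires_grad_(True)
loss = loss_fn(model(x), labels[:1])
loss.backward()

# Nudge the input slightly in the direction that increases the loss.
x_adv = x + epsilon * x.grad.sign()

# The two inputs look nearly identical, yet the model's prediction may change.
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))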

Q: What are the ways AI is complementing, or could complement, human work?

Malone: Today’s AI programs have only specialized intelligence; they’re capable of doing only certain specialized tasks. But humans have a kind of general intelligence that lets them do a much broader range of things.

That means one of the best ways for AI systems to complement human work is to do specialized tasks that computers can do better, faster, or more cheaply than people can. For example, AI systems can be helpful by doing tasks such as interpreting medical X-rays, evaluating the risk of fraud in a credit card charge, or generating unusual new product designs.

And humans can use their social skills, common sense, and other kinds of general intelligence to do things computers can’t do well. For instance, people can provide emotional support to patients diagnosed with cancer. They can decide when to believe customer explanations for unusual credit card transactions, and they can reject new product designs that customers would probably never want.

In other words, many of the most important uses of computers in the future won’t be replacing people; they’ll be working with people in human-computer groups — “superminds” — that can do things better than either people or computers alone could do.

The possibilities here go far beyond what people usually think of when they hear a phrase like “humans in the loop.” Instead of AI technologies just being tools to augment individual humans, we believe that many of their most important uses will occur in the context of groups of humans — often connected by the internet. So we should move from thinking about “humans in the loop” to thinking about “computers in the group.”

Q: What are some of your recommendations for education, business, and government regarding policies to help smooth the transition of AI technology adoption? 

Rus: In our report, we highlight four types of actions that can reduce the pain associated with job transitions: education and training, matching jobs to job seekers, creating new jobs, and providing counseling and financial support to people as they transition from old to new jobs. Importantly, we will need partnership among a broad range of institutions to get this work done.

Malone: We expect that — as with all previous labor-saving technologies — AI will eventually lead to the creation of more new jobs than it eliminates. But we see many opportunities for different parts of society to help smooth this transition, especially for the individuals whose old jobs are disrupted and who cannot easily find new ones.

For example, we believe that businesses should focus on applying AI in ways that don’t just replace people but that create new jobs by providing novel kinds of products and services. We recommend that all schools include computer literacy and computational thinking in their curricula, and we believe that community colleges should offer more reskilling and online micro-degree programs, often including apprenticeships at local employers.

We think that current worker organizations (such as labor unions and professional associations) or new ones (perhaps called “guilds”) should expand their roles to provide benefits previously tied to formal employment (such as insurance and pensions, career development, social connections, a sense of identity, and income security).

And we believe that governments should increase their investments in education and reskilling programs to make the American workforce once again the best-educated in the world. And they should reshape the legal and regulatory framework that governs work to encourage creating more new jobs.
