
Topic

Artificial intelligence


Displaying 31 - 45 of 1104 news clips related to this topic.

Interesting Engineering

Researchers at MIT have developed a new method that “enables robots to intuitively identify relevant areas of a scene based on specific tasks,” reports Baba Tamim for Interesting Engineering. “The tech adopts a distinctive strategy to make robots effective and efficient at sorting a cluttered environment, such as finding a specific brand of mustard on a messy kitchen counter,” explains Tamim. 

New Scientist

Researchers at MIT and elsewhere have found that “human memories can be distorted by photos and videos edited by artificial intelligence,” reports Matthew Sparkes for New Scientist. “I think the worst part here, that we need to be aware or concerned about, is when the user isn’t aware of it,” says postdoctoral fellow Samantha Chan. “We definitely have to be aware and work together with these companies, or have a way to mitigate these effects. Maybe have sort of a structure where users can still control and say ‘I want to remember this as it was’, or at least have a tag that says ‘this was a doctored photo, this was a changed photo, this was not a real one’.”

WHDH 7

Prof. Regina Barzilay has received the WebMD Health Heroes award for her work developing a new system that uses AI to detect breast cancer up to five years earlier, reports WHDH. “We do have a right to know our risk and then we, together with our healthcare providers, need to manage them,” says Barzilay.

Forbes

Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota '08 MS '16, MBA '16 explores the challenges, opportunities and future of AI-driven drug development. “I see the opportunities for AI in drug development as vast and transformative,” writes Hayes-Mota. “AI can help potentially uncover new drug candidates that would have been impossible to find through traditional methods.”

The Washington Post

Writing for The Washington Post, Prof. Daniela Rus, director of CSAIL, and Nico Enriquez, a graduate student at Stanford, make the case that the United States should not only be building more efficient AI software and better computer chips, but also creating “interstate-type corridors to transmit sufficient, reliable power to our data centers.” They emphasize: “The United States has the talent, investor base, corporations and research institutions to write the most advanced AI models. But without a powerful data highway system, our great technology advances will be confined to back roads.”

Bloomberg

Prof. Daron Acemoglu speaks with Bloomberg reporters John Lee and Katia Dmitrieva about the social and economic impacts of AI. “We don’t know where the future lies,” says Acemoglu. “There are many directions, the technology is malleable, we can make different choices.” 

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. “These conversations led to the humans involved stating a 20% reduction in the associated belief two months later.”

Newsweek

New research by Prof. David Rand and his colleagues has utilized generative AI to address conspiracy theory beliefs, reports Marie Boran for Newsweek. “The researchers had more than 2,000 Americans interact with ChatGPT about a conspiracy theory they believe in,” explains Boran. “Within three rounds of conversation with the chatbot, participants’ belief in their chosen conspiracy theory was reduced by 20 percent on average.”

Fortune

Researchers from MIT and elsewhere have found that LLM-based AI chatbots are more effective at implanting false memories than “other methods of trying to implant memories, such as old-fashioned surveys with leading questions or conversations with a pre-scripted chatbot,” reports Jeremy Kahn for Fortune. “It seems the ability of the generative AI chatbot to shape each question based on the previous answers of the test subjects gave it particular power,” explains Kahn.

Bloomberg

Researchers from MIT and Stanford University have found “staff at one Fortune 500 software firm became 14% more productive on average when using generative AI tools,” report Olivia Solon and Seth Fiegerman for Bloomberg.

Popular Science

A new study by researchers from MIT and elsewhere tested a generative AI chatbot’s ability to debunk conspiracy theories, reports Mack Degeurin for Popular Science. “In the end, conversations with the chatbot reduced the participant’s overall confidence in their professed conspiracy theory by an average of 20%,” writes Degeurin.

Los Angeles Times

A new study by researchers from MIT and elsewhere has found that an AI chatbot is capable of combating conspiracy theories, reports Karen Kaplan for the Los Angeles Times. The researchers found that conversations with the chatbot made people “less generally conspiratorial,” says Prof. David Rand. “It also increased their intentions to do things like ignore or block social media accounts sharing conspiracies, or, you know, argue with people who are espousing those conspiracy theories.”

The New York Times

A new chatbot developed by MIT researchers aimed at persuading individuals to stop believing unfounded conspiracy theories has made “significant and long-lasting progress at changing people’s convictions,” reports Teddy Rosenbluth for The New York Times. The chatbot, dubbed DebunkBot, challenges the “widely held belief that facts and logic cannot combat conspiracy theories.” Professor David Rand explains: “It is the facts and evidence themselves that are really doing the work here.”

Mashable

A new study by Prof. David Rand and his colleagues has found that chatbots, powered by generative AI, can help people abandon conspiracy theories, reports Rebecca Ruiz for Mashable. “Rand and his co-authors imagine a future in which a chatbot might be connected to social media accounts as a way to counter conspiracy theories circulating on a platform,” explains Ruiz. “Or people might find a chatbot when they search online for information about viral rumors or hoaxes thanks to keyword ads tied to certain conspiracy search terms.” 

Forbes

Researchers from MIT and elsewhere have created an AI Risk Repository, a free retrospective analysis detailing over 750 risks associated with AI, reports Tor Constantino for Forbes. “If current understanding is fragmented, policymakers, researchers, and industry leaders may believe they have a relatively complete shared understanding of AI risks when they actually don’t,” says Peter Slattery, a research affiliate at the MIT FutureTech project. “This sort of misconception could lead to critical oversights, inefficient use of resources, and incomplete risk mitigation strategies, which leave us more vulnerable.”