
Topic

Computer science and technology


Displaying 1 - 15 of 1068 news clips related to this topic.

Bloomberg

Prof. Daron Acemoglu speaks with Bloomberg reporters John Lee and Katia Dmitrieva about the social and economic impacts of AI. “We don’t know where the future lies,” says Acemoglu. “There are many directions, the technology is malleable, we can make different choices.” 

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. “These conversations led the humans involved to stating a 20% reduction in the associated belief two months later.”

Fortune

Researchers from MIT and elsewhere have found that LLM-based AI chatbots are more effective at implanting false memories than “other methods of trying to implant memories, such as old-fashioned surveys with leading questions or conversations with a pre-scripted chatbot,” reports Jeremy Kahn for Fortune. “It seems the ability of the generative AI chatbot to shape each question based on the previous answers of the test subjects gave it particular power,” explains Kahn.

Forbes

Researchers from MIT and elsewhere have created an AI Risk Repository, a free retrospective analysis detailing over 750 risks associated with AI, reports Tor Constantino for Forbes. “If current understanding is fragmented, policymakers, researchers, and industry leaders may believe they have a relatively complete shared understanding of AI risks when they actually don’t,” says Peter Slattery, a research affiliate at the MIT FutureTech project. “This sort of misconception could lead to critical oversights, inefficient use of resources, and incomplete risk mitigation strategies, which leave us more vulnerable.”

Forbes

In an article for Forbes, Robert Clark spotlights how MIT researchers developed a new model to predict irrational behaviors in humans and AI agents in suboptimal conditions. “The goal of the study was to better understand human behavior to improve collaboration with AI,” Clark writes. 

Forbes

Forbes contributor Peter High spotlights research by Senior Research Scientist Peter Weill, covering real-time decision-making, the importance of digitally savvy leadership and the potential of generative AI. High notes Weill’s advice to keep up. “The gap between digitally advanced companies and those lagging is widening, and the consequences of not keeping pace are becoming more severe. ‘You can’t get left behind on being real time,’ he warned.”

New Scientist

Researchers from MIT and Northwestern University have developed guidelines for how to spot deepfakes, noting that “there is no fool-proof method that always works,” reports Jeremy Hsu for New Scientist.

Forbes

Senior lecturer Paul McDonagh-Smith speaks with Forbes reporter Joe McKendrick about the history behind the AI hype cycle. “While AI technologies and techniques are at the forefront of today’s technological innovation, it remains a field defined — as it has been from the 1950s — by both significant achievements and considerable hype,” says McDonagh-Smith.

Fortune

MIT alumni Mike Ng and Nikhil Buduma founded Ambience, which has developed an “AI-powered platform geared towards improving documentation processes in medicine,” reports Fortune’s Allie Garfinkle. “In a world filled with AI solutions in search of a problem, Ambience is focusing on a pain point that just about any doctor will attest to (after all, who likes filling out paperwork?),” writes Garfinkle.

Forbes

After meeting at MIT, alumni Honghao Deng and Jiani Zeng founded Butr, which makes anonymous people-detecting sensors to measure movement inside buildings, reports Zoya Hasan for Forbes. The sensors could help address staffing challenges in senior living communities, and alert staff of falls or other medical issues. 

Bloomberg

Prof. William Deringer speaks with David Westin on Bloomberg’s Wall Street Week about the power of early spreadsheet programs in the 1980s financial services world. When asked to compare today’s AI in the context of workplace automation fears, he says, “one thing we know from the history of technology — and certainly the history of calculation tools that I like to study — is that the automation of some of these calculations…doesn’t necessarily lead to less work.”

Forbes

Researchers at MIT have developed “a publicly available database, culled from reports, journals, and other documents to shed light on the risks AI experts are disclosing through papers, reports, and other documents,” reports Jon McKendrick for Forbes. “These benchmarked risks will help develop a greater understanding of the risks versus rewards of this new force entering the business landscape,” writes McKendrick.

Forbes

Edwin Olson '00, MEng '01, PhD '08 founded May Mobility, an autonomous vehicle company that keeps human operators aboard its vehicles during rides, reports Gus Alexiou for Forbes. “May Mobility is focused above all else on gradually building up the confidence of its riders and community stakeholders in the technology over the long-term,” explains Alexiou. “This may be especially true for certain more vulnerable sections of society such as the disability community where the need for more personalized and affordable forms of transportation is arguably greatest but so too is the requirement for robust safety and accessibility protocols.”

Wired

A new database of AI risks has been developed by MIT researchers in an effort to help guide organizations as they begin using AI technologies, reports Will Knight for Wired. “Many organizations are still pretty early in that process of adopting AI,” meaning they need guidance on the possible perils, says Research Scientist Neil Thompson, director of the FutureTech project.

TechCrunch

TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.”