
Topic

Human-computer interaction


Displaying 1–15 of 76 news clips related to this topic.

VICE

Researchers at MIT and elsewhere have developed “Future You,” a generative AI platform that lets users converse with a simulated older version of themselves, reports Sammi Caramela for Vice. The research team hopes “talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions,” writes Caramela.

TechCrunch

Researchers at MIT have found that commercially available AI models “were more likely to recommend calling police when shown Ring videos captured in minority communities,” reports Kyle Wiggers for TechCrunch. “The study also found that, when analyzing footage from majority-white neighborhoods, the models were less likely to describe scenes using terms like ‘casing the property’ or ‘burglary tools,’” writes Wiggers.

Fortune

Researchers at MIT have developed “Future You,” a generative AI chatbot that enables users to speak with potential older versions of themselves, reports Sharon Goldman for Fortune. The tool “uses a large language model and information provided by the user to help young people ‘improve their sense of future self-continuity, a psychological concept that describes how connected a person feels with their future self,’” writes Goldman. “The researchers explained that the tool cautions users that its results are only one potential version of their future self, and they can still change their lives,” she adds.

Forbes

In an article for Forbes, Robert Clark spotlights how MIT researchers developed a new model for predicting the irrational behavior of both humans and AI agents acting under suboptimal conditions. “The goal of the study was to better understand human behavior to improve collaboration with AI,” Clark writes.

TechCrunch

TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.” 

NPR

Prof. Sherry Turkle joins Manoush Zomorodi of NPR’s "Body Electric" to discuss her latest research on human relationships with AI chatbots, which she says can be beneficial but come with drawbacks since artificial relationships could set unrealistic expectations for real ones. "What AI can offer is a space away from the friction of companionship and friendship,” explains Turkle. “It offers the illusion of intimacy without the demands. And that is the particular challenge of this technology." 

Popular Science

Tomás Vega SM '19 is CEO and co-founder of Augmental, a startup helping people with movement impairments interact with their computing devices, reports Popular Science’s Andrew Paul. Seeking to overcome the limitations of most brain-computer interfaces, the company built its first product, the MouthPad, an interface that leverages the tongue muscles. “Our hope is to create an interface that is multimodal, so you can choose what works for you,” said Vega. “We want to be accommodating to every condition.”

Popular Mechanics

Researchers at CSAIL have created three “libraries of abstraction” – collections of natural-language abstractions that highlight the importance of everyday words in providing context and better reasoning for large language models, reports Darren Orf for Popular Mechanics. “The researchers focused on household tasks and command-based video games, and developed a language model that proposes abstractions from a dataset,” explains Orf. “When implemented with existing LLM platforms, such as GPT-4, AI actions like ‘placing chilled wine in a cabinet’ or ‘craft a bed’ (in the Minecraft sense) saw a big increase in task accuracy at 59 to 89 percent, respectively.”

The Hill

The Hill reporter Tobias Burns spotlights the efforts of a number of MIT researchers to better understand the impact of generative AI on workforce productivity. One research study “looked at cases where AI helped improve productivity and worker experience specifically in outsourced settings, such as call centers,” explains Burns. Another study explored the impact of AI programs, such as ChatGPT, on employees.

Quanta Magazine

MIT researchers have developed a new procedure that uses game theory to improve the accuracy and consistency of large language models (LLMs), reports Steve Nadis for Quanta Magazine. “The new work, which uses games to improve AI, stands in contrast to past approaches, which measured an AI program’s success via its mastery of games,” explains Nadis. 

TechCrunch

Researchers at MIT have found that large language models often retrieve stored knowledge using simple linear functions, reports Kyle Wiggers for TechCrunch. “Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them,” writes Wiggers.

Politico

MIT researchers have found that “when an AI tool for radiologists produced a wrong answer, doctors were more likely to come to the wrong conclusion in their diagnoses,” report Daniel Payne, Carmen Paun, Ruth Reader and Erin Schumaker for Politico. “The study explored the findings of 140 radiologists using AI to make diagnoses based on chest X-rays,” they write. “How AI affected care wasn’t dependent on the doctors’ levels of experience, specialty or performance. And lower-performing radiologists didn’t benefit more from AI assistance than their peers.”

The Boston Globe

Prof. Daniela Rus, director of CSAIL, speaks with Boston Globe reporter Evan Sellinger about her new book, “The Heart and the Chip: Our Bright Future With Robots,” in which she makes the case that in the future robots and humans will be able to team up to create a better world. “I want to highlight that machines don’t have to compete with humans, because we each have different strengths. Humans have wisdom. Machines have speed, can process large numbers, and can do many dull, dirty, and dangerous tasks,” Rus explains. “I see robots as helpers for our jobs. They’ll take on the routine, repetitive tasks, ensuring human workers focus on more complex and meaningful work.”

The Daily Beast

MIT researchers have developed a new technique “that could allow most large language models (LLMs) like ChatGPT to retain memory and boost performance,” reports Tony Ho Tran for the Daily Beast. “The process is called StreamingLLM and it allows chatbots to perform optimally even after a conversation goes on for more than 4 million words,” explains Tran.