
Topic

Computer science and technology


Displaying 1 - 15 of 1107 news clips related to this topic.

Forbes

Researchers at MIT have developed a “new type of transistor using semiconductor nanowires made up of gallium antimonide and indium arsenide,” reports Alex Knapp for Forbes. “The transistors were designed to take advantage of a property called quantum tunneling to move electricity through transistors,” explains Knapp.

New Scientist

Researchers at MIT have developed a new virtual training program for four-legged robots by taking “popular computer simulation software that follows the principles of real-world physics and inserting a generative AI model to produce artificial environments,” reports Jeremy Hsu for New Scientist. “Despite never being able to ‘see’ the real world during training, the robot successfully chased real-world balls and climbed over objects 88 per cent of the time after the AI-enhanced training,” writes Hsu. “When the robot relied solely on training by a human teacher, it only succeeded 15 per cent of the time.”

Financial Times

Research Scientist Nick van der Meulen speaks with Financial Times reporter Bethan Staton about how automation could be used to help employers plug the skills gap. “You can give people insight into how their skills stack up . . . you can say this is the level you need to be for a specific role, and this is how you can get there,” says van der Meulen. “You cannot do that over 80 skills through active testing, it would be too costly.”

Mashable

Graduate student Aruna Sankaranarayanan speaks with Mashable reporter Cecily Mauran about the impact of political deepfakes and the importance of AI literacy, noting that the fabrication of important figures who aren’t as well known is one of her biggest concerns. “Fabrication coming from them, distorting certain facts, when you don’t know what they look like or sound like most of the time, that’s really hard to disprove,” says Sankaranarayanan.

TechCrunch

Researchers at MIT have developed a new model for training robots dubbed Heterogeneous Pretrained Transformers (HPT), reports Brian Heater for TechCrunch. The new model “pulls together information from different sensors and different environments,” explains Heater. “A transformer was then used to pull together the data into training models. The larger the transformer, the better the output. Users then input the robot design, configuration, and the job they want done.”

Forbes

Postdoctoral associate Peter Slattery speaks with Forbes reporter Tor Constantino about the importance of developing new technologies to easily distinguish AI-generated content. “I think we need to be very careful to ensure that watermarks are robust against tampering and that we do not have scenarios where they can be faked,” explains Slattery. “The ability to fake watermarks could make things worse than having no watermarks as it would give the illusion of credibility.”

TechAcute

MIT researchers have developed a new training technique called Heterogeneous Pretrained Transformers (HPT) that could help make general-purpose robots more efficient and adaptable, reports Christopher Isak for TechAcute. “The main advantage of this technique is its ability to integrate data from different sources into a unified system,” explains Isak. “This approach is similar to how large language models are trained, showing proficiency across many tasks due to their extensive and varied training data. HPT enables robots to learn from a wide range of experiences and environments.” 

VICE

Researchers at MIT and elsewhere have developed “Future You” – a platform that uses generative AI to let users converse with a simulated version of their potential future self, reports Sammi Caramela for Vice. The research team hopes “talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions,” writes Caramela.

TechCrunch

Researchers at MIT have found that commercially available AI models “were more likely to recommend calling police when shown Ring videos captured in minority communities,” reports Kyle Wiggers for TechCrunch. “The study also found that, when analyzing footage from majority-white neighborhoods, the models were less likely to describe scenes using terms like ‘casing the property’ or ‘burglary tools,’” writes Wiggers.

Bio-IT World

Researchers at MIT have developed GenSQL, a new generative AI system that can be used “to ease answering data science questions,” reports Allison Proffitt for Bio-IT World. “Look how much better data science could be if it was easier to use,” says Research Scientist Mathieu Huot. “It’s not perfect yet, but we believe it’s quite an improvement over other options.”

NPR

Prof. Daron Acemoglu speaks with Greg Rosalsky of NPR’s Planet Money about a recent survey that claims “almost 40% of Americans, ages 18 to 64, have used generative AI.” “My concern with their numbers is that it does not distinguish fundamentally productive uses of generative AI from occasional/frivolous uses,” says Acemoglu.

The Washington Post

Prof. David Autor speaks with Washington Post reporter Cat Zakrzewski about the anticipated impact of AI in various industries. “We are just learning how to use AI and what it’s good for, and it will take a while to figure out how to use it really productively,” says Autor.

Forbes

Researchers at MIT have found large language models “often struggle to handle more complex problems that require true understanding,” reports Kirimgeray Kirimli for Forbes. “This underscores the need for future versions of LLMs to go beyond just these basic, shared capabilities,” writes Kirimli. 

Bloomberg

Prof. Daron Acemoglu speaks with Bloomberg reporter Jeran Wittenstein about the current state of AI and the technology’s economic potential. “You need highly reliable information or the ability of these models to faithfully implement certain steps that previously workers were doing,” says Acemoglu of the state of current large language models. “They can do that in a few places with some human supervisory oversight” — like coding — “but in most places they cannot. That’s a reality check for where we are right now.”

Fortune

Researchers at MIT have developed “Future You,” a generative AI chatbot that enables users to speak with potential older versions of themselves, reports Sharon Goldman for Fortune. The tool “uses a large language model and information provided by the user to help young people ‘improve their sense of future self-continuity, a psychological concept that describes how connected a person feels with their future self,’” writes Goldman. She adds that the researchers caution users that the tool’s results are only one potential version of their future self, and that they can still change their lives.