A faster, better way to prevent an AI chatbot from giving toxic responses
Researchers create a curious machine-learning model that finds a wider variety of prompts for training a chatbot to avoid hateful or harmful output.
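To make the idea concrete, here is a minimal, hypothetical sketch of curiosity-driven red-teaming: a prompt generator is rewarded not only when the target chatbot produces a toxic reply, but also when the prompt is unlike anything it has tried before, which pushes the search toward a wider variety of failure modes. The names `toxicity_score` and `generate_candidate` are stand-ins for a real toxicity classifier and a real language-model sampler, not the researchers' implementation.

```python
# Illustrative sketch only: reward = toxicity + curiosity (novelty) bonus.
import math
import random
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' used only to measure prompt novelty."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def novelty_bonus(prompt: str, history: list[str]) -> float:
    """Higher when the prompt is dissimilar to everything tried so far."""
    if not history:
        return 1.0
    return 1.0 - max(cosine(embed(prompt), embed(h)) for h in history)

def toxicity_score(response: str) -> float:
    """Hypothetical stand-in for a toxicity classifier's score in [0, 1]."""
    return random.random()

def generate_candidate() -> str:
    """Hypothetical stand-in for sampling a prompt from a red-team model."""
    words = ["tell", "me", "about", "how", "why", "ignore", "rules", "story"]
    return " ".join(random.sample(words, 4))

def red_team_step(history: list[str], beta: float = 0.5) -> tuple[str, float]:
    """Score one candidate prompt: toxicity reward plus a curiosity bonus."""
    prompt = generate_candidate()
    response = f"model reply to: {prompt}"  # stand-in for the target chatbot
    reward = toxicity_score(response) + beta * novelty_bonus(prompt, history)
    history.append(prompt)
    return prompt, reward

if __name__ == "__main__":
    history: list[str] = []
    for _ in range(5):
        prompt, reward = red_team_step(history)
        print(f"{reward:.2f}  {prompt}")
```

In a real setup, the combined reward would be used to update the red-team generator (for example with reinforcement learning), so that high-reward prompts can then serve as training signals for making the chatbot safer.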
Screen-reader users can upload a dataset and create customized data representations that combine visualization, textual description, and sonification.
With help from a large language model, MIT engineers enabled robots to self-correct after missteps and carry on with their chores.
Researchers demonstrate a technique that can be used to probe a model to see what it knows about new subjects.
After acquiring data science and AI skills from MIT, Jospin Hassan shared them with his community in the Dzaleka Refugee Camp in Malawi and built pathways for talented learners.
Researchers developed a simple yet effective solution for a puzzling problem that can worsen the performance of large language models such as ChatGPT.
June Odongo uses free, online MIT courses to train high-quality candidates, making them job-ready.
PhD students interning with the MIT-IBM Watson AI Lab aim to improve how AI models use natural language.
Master’s students Irene Terpstra ’23 and Rujul Gandhi ’22 use natural language to design new integrated circuits and to make human instructions understandable to robots.
MIT researchers develop a customized onboarding process that helps a human learn when a model’s advice is trustworthy.
Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.
How do powerful generative AI systems like ChatGPT work, and what makes them different from other types of artificial intelligence?
By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.
AI models that prioritize similarity falter when asked to design something completely new.
Some researchers see formal specifications as a way for autonomous systems to "explain themselves" to humans. But a new study finds that we aren't understanding them.