
Topic

Algorithms


Displaying 16–30 of 553 news clips related to this topic.

TechCrunch

Prof. Daniela Rus, director of CSAIL, and research affiliates Ramin Hasani, Mathias Lechner, and Alexander Amini have co-founded Liquid AI, a startup building a general-purpose AI system powered by a liquid neural network, reports Kyle Wiggers for TechCrunch. “Accountability and safety of large AI models is of paramount importance,” says Hasani. “Liquid AI offers more capital efficient, reliable, explainable and capable machine learning models for both domain-specific and generative AI applications.”

Nature

MIT researchers have “used an algorithm to sort through millions of genomes to find new, rare types of CRISPR systems that could eventually be adapted into genome-editing tools,” writes Sara Reardon for Nature. “We are just amazed at the diversity of CRISPR systems,” says Prof. Feng Zhang. “Doing this analysis kind of allows us to kill two birds with one stone: both study biology and also potentially find useful things.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani and their work with liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.

Popular Science

Prof. Yoon Kim speaks with Popular Science reporter Charlotte Hu about how large language models like ChatGPT operate. “You can think of [chatbots] as algorithms with little knobs on them,” says Kim. “These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

CNN

Researchers at MIT have developed “PhotoGuard,” a tool that can be used to protect images from AI manipulation, reports Catherine Thorbecke for CNN. The tool “puts an invisible ‘immunization’ over images that stops AI models from being able to manipulate the picture,” writes Thorbecke.

Forbes

At CSAIL’s Imagination in Action event, Prof. Stefanie Jegelka’s presentation provided insight into “the failures and successes of neural networks and explored some crucial context that can help engineers and other human observers to focus in on how learning is happening,” reports research affiliate John Werner for Forbes.

Forbes

Prof. Jacob Andreas explored the concept of language guided program synthesis at CSAIL’s Imagination in Action event, reports research affiliate John Werner for Forbes. “Language is a tool,” said Andreas during his talk. “Not just for training models, but actually interpreting them and sometimes improving them directly, again, in domains, not just involving languages (or) inputs, but also these kinds of visual domains as well.”

Forbes

Prof. Daniela Rus, director of CSAIL, writes for Forbes about Prof. Dina Katabi’s work using insights from wireless systems to help glean information about patient health. “Incorporating continuous time data collection in healthcare using ambient WiFi detectable by machine learning promises an era where early and accurate diagnosis becomes the norm rather than the exception,” writes Rus.

ABC News

Researchers from MIT and Massachusetts General Hospital have developed “Sybil,” an AI tool that can detect the risk of a patient developing lung cancer within six years, reports Mary Kekatos for ABC News. “Sybil was trained on low-dose chest computer tomography scans, which is recommended for those between ages 50 and 80 who either have a significant history of smoking or currently smoke,” explains Kekatos.

Forbes

During her talk at CSAIL’s Imagination in Action event, Prof. Daniela Rus, director of CSAIL, explored the promise of using liquid neural networks “to solve some of AI’s notorious complexity problems,” writes research affiliate John Werner for Forbes. “Liquid networks are a new model for machine learning,” said Rus. “They're compact, interpretable and causal. And they have shown great promise in generalization under heavy distribution shifts.”

Forbes

In an article for Forbes, research affiliate John Werner spotlights Prof. Dina Katabi and her work showcasing how AI can boost the capabilities of clinical data. “We are going to collect data, clinical data from patients continuously in their homes, track the symptoms, the evolution of those symptoms, and process this data with machine learning so that we can get insights before problems occur,” says Katabi.

WCVB

Prof. Regina Barzilay speaks with Nicole Estephan of WCVB-TV’s Chronicle about her work developing new AI systems that could be used to help diagnose breast and lung cancer before the cancers are detectable to the human eye.

Science

In conversation with Matthew Hutson at Science, Prof. John Horton discusses the possibility of using chatbots in research instead of human participants. As he explains, such a change would be similar to the transition from in-person to online surveys: “People were like, ‘How can you run experiments online? Who are these people?’ And now it’s like, ‘Oh, yeah, of course you do that.’”