Speaking at the “Generative AI: Shaping the Future” symposium on November 28, the kickoff event of MIT's Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI's ChatGPT and Google's Bard.
“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.
“No one technology has ever surpassed everything else,” he added.
The symposium, which drew hundreds of attendees from academia and industry to the Institute's Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.
Generative AI is a term that describes machine learning models that learn to generate new content resembling the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, or craft photorealistic images from text prompts.
In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects by faculty and students that use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.
The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered around how artificial intelligence will change the lives of people across society.
By hosting Generative AI Week, MIT hopes not only to showcase this kind of innovation, but also to generate “collaborative collisions” among attendees, Kornbluth said.
She told the audience that collaboration among academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems.
“Frankly, I cannot think of a challenge more aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can meet it, if we face it head-on, and if we face it as a community,” she said.
While generative AI holds the potential to help solve some of the planet's most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. The question is no longer whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability.
“Today, we will discuss the possibility of a future in which generative AI is not just a technological marvel, but a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.
But before the discussion delved into the capabilities of generative AI, attendees were first asked to reflect on their own humanity, while MIT professor Joshua Bennett read an original poem.
Bennett, a professor in MIT's Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who had been born three weeks earlier.
The poem told of his experiences as a boy watching Star Trek with his father, and touched on the importance of passing traditions down to the next generation.
In his keynote, Brooks set out to unravel some of the deep scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.
To begin, he sought to dispel some of the mystery surrounding generative AI tools like ChatGPT by explaining the basics of how this large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT focuses only on the next word, Brooks explained.
GPT-3.5, the model underlying ChatGPT, has 175 billion parameters and was exposed to billions of pages of text from the web during training. (The latest version, GPT-4, is even larger.) It learns the correlations between words in this massive body of text and uses that knowledge to propose which word might come next when given a prompt.
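To make that word-at-a-time process concrete, here is a minimal illustrative sketch in Python. It is not how ChatGPT is implemented: the hand-written probability table and the function are hypothetical stand-ins for a 175-billion-parameter network, and this toy conditions only on the single previous word rather than on everything written so far. But the loop, predict the next word, append it, repeat, is the basic idea Brooks described.

```python
import random

# A toy stand-in for a language model: for each word, the estimated
# probability of each word that might follow it. The table is invented
# for illustration; a real model learns correlations like these from
# billions of pages of text rather than a hand-made dictionary.
NEXT_WORD_PROBS = {
    "the":    {"robot": 0.6, "poem": 0.4},
    "robot":  {"writes": 0.7, "dreams": 0.3},
    "poem":   {"ends": 1.0},
    "writes": {"the": 1.0},
    "dreams": {"of": 1.0},
    "of":     {"the": 1.0},
}

def generate(prompt: str, max_new_words: int = 8) -> str:
    """Generate text one word at a time, in the spirit Brooks described."""
    words = prompt.lower().split()
    for _ in range(max_new_words):
        # This toy conditions only on the last word; ChatGPT conditions
        # on the entire context so far, but the loop has the same shape.
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break  # no known continuation, so stop generating
        # Pick the next word in proportion to its estimated probability.
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the robot"))  # e.g. "the robot writes the poem ends"
```

In a real system the candidate probabilities come from the trained network and decoding adds refinements such as sampling temperature and long context windows, but the essential step is the same: propose a next word, commit to it, and continue.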
The model has demonstrated some incredible abilities, such as the ability to write a sonnet about robots in the style of Shakespeare's famous Sonnet 18. During his talk, Brooks displayed the sonnet he asked ChatGPT to write side by side with his own sonnet.
But while researchers still don't fully understand exactly how these models work, Brooks assured the audience that generative AI's seemingly incredible capabilities are not magic, and that these models cannot do just anything.
His biggest concerns about generative AI don't revolve around models that could one day surpass human intelligence. Instead, he is most worried about researchers who might abandon decades of excellent work that was nearing a breakthrough just to chase shiny new advances in generative AI; venture capital firms that swarm blindly toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and artificial intelligence.
Ultimately, those who believe generative AI can solve the world's problems and those who believe it will only create new ones have at least one thing in common: Both groups tend to overestimate the technology, he said.
“What's the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.
Following Brooks' presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.
The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.
Participants discussed several potential future research directions on generative AI, including the possibility of integrating perceptual systems, drawing on human senses such as touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure that generative AI tools are produced and deployed responsibly.
“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products coming out that claim to do miraculous things, but in the long run could be very harmful,” Solar-Lezama said.
The morning session concluded with an excerpt from the 1925 science fiction novel Metropolis, read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The panel featured Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator at CSAIL and the MIT Jameel Clinic; and Max Tegmark, a professor of physics. It was moderated by Daniela Rus.
One focus of the discussion was the possibility of developing generative AI models that could go beyond what we as humans can do, such as tools that can sense someone's emotions using electromagnetic signals to understand how a person's breathing and heart rate change.
But one of the keys to safely integrating AI like this into the real world is making sure we can trust it, Tegmark said. He added that if we know that an AI tool will meet the specifications we insist on, “we no longer have to be afraid to build really powerful systems that go out and do things for us in the world.”