“Nature is trying to tell us something here, which is, this doesn’t really work, but the field is so believing its own press clippings, that it just can’t see that,” he adds.
Even de Freitas’s DeepMind colleagues, Jackie Kay and Scott Reed, who worked with him on Gato, were more circumspect when I asked them directly about his claims. When asked whether Gato was heading toward AGI, they wouldn’t be drawn. “I don’t actually think it’s really feasible to make predictions with these kinds of things. I try to avoid that. It’s like predicting the stock market,” said Kay.
Reed said the question was a difficult one. “I think most machine learning people will studiously avoid answering. Very hard to predict, but, you know, hopefully we get there someday.”
In a way, the fact that DeepMind called Gato a “generalist” might have made it a victim of the AI sector’s excessive hype around AGI. The AI systems of today are called “narrow” AI, meaning they can only do a specific, restricted set of tasks, such as generating text.
Some technologists, including at DeepMind, think that one day humans will develop “broader” AI systems that will be able to function as well as or even better than humans. Some call this artificial “general” intelligence. Others say it is like “belief in magic.” And many top researchers, such as Meta’s chief AI scientist Yann LeCun, question whether it is even possible at all.
Gato is a “generalist” in the sense that it can do many different things at the same time. But that is a world apart from a “general” AI that can meaningfully adapt to new tasks that are different from what the model was trained on, says MIT’s Andreas. “We’re still quite far from being able to do that.”
Making models bigger will also not address the problem that models lack “lifelong learning”: the ability to be taught something once, grasp all of its implications, and use it to inform every subsequent decision they make, he says.
The hype around tools like Gato is harmful for the general development of AI, argues Emmanuel Kahembwe, an AI/robotics researcher and part of the Black in AI organization co-founded by Timnit Gebru. “There are many interesting topics that are left to the side, that are underfunded, that deserve more attention, but that’s not what the big tech companies and the bulk of researchers in such tech companies are interested in,” he says.
Tech companies ought to take a step back and take stock of why they are building what they are building, says Vilas Dhar, president of the Patrick J. McGovern Foundation, a charity that funds AI projects “for good.”
“AGI speaks to something deeply human—the idea that we can become more than we are, by building tools that propel us to greatness,” he says. “And that’s really nice, except it also is a way to distract us from the fact that we have real problems that face us today that we should be trying to address using AI.”