The global pandemic has been cited as a “wake-up call” for many things — the environment, economic and social rights, and general global inequalities. However, scientist, author, and entrepreneur Gary Marcus thinks that the crisis should also be considered a wake-up call for AI, too.

Speaking at the virtual Intelligent Health AI conference yesterday, Marcus lamented decades of missed opportunities to build more robust artificial intelligence, arguing that too many AI resources have been put toward technologies that don’t help the world in any meaningful way.

“We would like AI that could read and synthesize the vast, quickly growing medical literature, for example, about COVID-19,” he said. “We want our AI to be able to reason causally, we want it to be able to weed out misinformation. We want to be able to guide robots to keep humans out of dangerous situations, care for the elderly, deliver packages to the door. With AI having been around [for] 60 years, I don’t think it’s unreasonable to wish that we might have had some of these things by now. But the AI that we actually have, like playing games, transcribing syllables, and vacuuming floors, it’s really pretty far away from the things that we’ve been promised.”

One of the underlying issues, according to Marcus, is that we’re putting too much focus on deep learning.

“To understand how to bring AI to the next level, we first need to understand where we are, and where we are right now is in the era of deep learning, where deep learning is the best technique, and the dominant technique, and maybe one that’s getting too much attention,” Marcus said.

Marcus has a PhD in cognitive science from MIT, and has been a professor of psychology and neural science at New York University for the past 20 years. Throughout that period, he has also written several books, and in 2015 he cofounded Geometric Intelligence, a stealth AI startup which was swiftly snapped up by Uber to serve as the foundation of its new AI Labs. Marcus stepped down as head of Uber’s new unit after just a few months, and he later went on to found Robust.ai to build an “industrial-grade cognitive engine” for robots.

The problem

Deep learning is a branch of machine learning based on artificial neural networks that try to mimic how the human brain works. Large swathes of data (images, audio, text, consumer actions, etc.) train a deep learning system to recognize patterns, which can be used to help Netflix recommend video content or autonomous cars identify pedestrians and road signs. But deep learning isn’t short of critics, and its inherent weaknesses are well understood: slight changes to the input data, changes that a human may (or may not) even be able to spot, can confuse even the most advanced systems. An example Marcus uses is that you can train a deep learning system to identify elephants, but show it a silhouette of an elephant, one that a human would easily recognize, and the AI would likely fail.
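Marcus’s elephant example is a distribution-shift failure: a model that only ever sees one kind of training data latches onto whatever patterns happen to hold there, and breaks on inputs outside that regime. A minimal sketch of the same effect, using a toy logistic-regression classifier on made-up data (the features, values, and “true rule” here are all illustrative assumptions, not anything from Marcus’s work):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set. The "true" rule a human would apply is x0 > 0,
# but every training example also carries a spurious feature x1 that
# perfectly tracks the label and is larger in magnitude -- an easy
# shortcut for the model to learn instead.
X = np.array([[ 0.1,  1.0], [ 0.2,  1.0], [ 0.3,  1.0],
              [-0.1, -1.0], [-0.2, -1.0], [-0.3, -1.0]])
y = np.array([1, 1, 1, 0, 0, 0])

# Plain logistic regression trained by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)
    w -= 0.5 * grad

# In-distribution, the model looks excellent: predictions near 0 and 1.
print(sigmoid(X @ w).round(2))

# Out-of-distribution input: x0 = 2.0 makes it clearly class 1 under
# the true rule, but the shortcut feature is flipped. The model,
# having leaned on x1, gets it wrong.
ood = np.array([2.0, -1.0])
print(sigmoid(ood @ w))  # below 0.5, i.e. the wrong class
```

The model is “correct” everywhere its training data lives and wrong the moment the input steps outside that regime, which is the brittleness Marcus is pointing at.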

“The reality is that deep learning works best in a regime of big data, but it’s worse in unusual cases… so if you have a lot of routine data then you’re fine,” Marcus said. “But if you have something unusual and important, which is everything about COVID since there is no historical data, then deep learning is just not a very good tool.”

Marcus also reiterated points from his book Rebooting AI, published last year, noting that the AI world needs to refocus its efforts on a more hybrid, “knowledge-driven” approach: one that combines deep learning, which is good at some types of learning but “is terrible for abstraction,” with classical AI systems capable of reasoning and encoding knowledge.

Whatever the best path forward is, Marcus’s main takeaway as far as COVID-19 is concerned is that the pandemic should motivate the AI world to rethink the problems it is ultimately trying to solve.

“COVID-19 is a wake up call, it’s motivation for us to stop building AI for ad tech, news feeds, and things like that, and make AI that can really make a difference,” he said. “With better AI, we might have computers that can read, digest, filter, and synthesize all the vast growing literature [around COVID-19]. Robots could take on a lot of the risks that human health care workers are facing. To get to that level of AI, that can operate in trustworthy ways even in a novel environment, we’re going to need to work towards building systems with deep understanding not just deep learning.”
