John Ennis

Eye on AI - July 9th, 2021

Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!

 

Can machines become truly intelligent? That’s the question we’ll be addressing this week as we delve into AI’s difficulty in grasping the concepts of “same” and “different”, which are essential to the development of intelligent machines.


Enjoy!


Despite Benefits, AI Struggles with Concept Most Animals Understand



There’s no question that AI systems are exceptionally proficient at analyzing data and identifying complex patterns invisible to the human eye. Yet according to a recent Quanta Magazine article, titled “Same or Different? The Question Flummoxes Neural Networks,” even the most advanced AI stumbles when it comes to interpreting the basic concepts of “same” and “different”.


“Not only do you and I succeed at the same-different task, but a bunch of nonhuman animals do, too — including ducklings and bees,” said Chaz Firestone, who studies visual cognition at Johns Hopkins University. John Pavlus, the author of the article, adds, “Kids never had to relearn the rules [to Sesame Street’s ‘One of These Things Is Not Like the Other’ game]. Understanding the distinction between ‘same’ and ‘different’ was enough. Machines have a much harder time.”

You’d think that AI systems, with all the advances in training in recent years, would have mastered something as simple as same-different concepts by now. But that simply isn’t the case. Even convolutional neural networks, or CNNs, which are among the most powerful AI systems, struggle with same-different tasks.


Take the “prove you’re not a robot” picture test online as an example. The test, which asks you to click the photos containing, say, a train or a walkway, is meant to determine whether you’re a robot or a human. Humans have little difficulty with such tests. AI systems, on the other hand, struggle to reliably pick out all the right pictures.


“Same-different relations have dogged neural networks since at least 2013, when the pioneering AI researcher Yoshua Bengio and his co-author, Caglar Gulcehre, showed that a CNN could not tell if groups of blocky, Tetris-style shapes were identical or not,” adds Pavlus. “... A recent survey of research on same-different reasoning also stressed this point. ‘Without the ability to recognize sameness,’ the [survey] authors wrote, ‘there would seem to be little hope of realizing the dream of creating truly intelligent visual reasoning machines.’”

As sameness identification continued to stump AI, research teams tested whether same-different concepts could actually be taught to machines. One such team used the Synthetic Visual Reasoning Test (SVRT), a collection of simple patterns designed to probe neural networks’ abstract reasoning skills, to measure CNN accuracy on same-different tasks. “Patterns consisted of pairs of irregular shapes drawn in black outline on a white square,” writes Pavlus. “If the pair was identical in shape, size, and orientation, the image was classified ‘same’; otherwise, the pair was labeled ‘different.’” The results: CNNs trained on many examples of these patterns distinguished “same” from “different” with up to 75% accuracy. However, when the shapes were made larger or moved farther apart, accuracy plummeted. It seems that rather than learning the relational concept of “sameness,” the neural networks had fixated on surface features such as size and shape. Other studies showed similar results.
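To make the setup concrete, here is a minimal sketch of how SVRT-style same-different stimuli of the kind Pavlus describes might be generated. This is not the researchers’ actual code; the shape generator, canvas size, and placement values are illustrative assumptions.

```python
# Minimal sketch of SVRT-style same/different stimuli (illustrative only;
# not the original SVRT generator). Shapes are irregular polygon outlines
# drawn in "black" (value 1) on a white (value 0) square canvas.
import numpy as np

RNG = np.random.default_rng(0)

def random_shape(n_vertices=6, scale=20):
    """Return an irregular closed polygon as an (n, 2) array of (x, y) points."""
    angles = np.sort(RNG.uniform(0, 2 * np.pi, n_vertices))
    radii = RNG.uniform(0.5, 1.0, n_vertices) * scale
    return np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

def draw(canvas, shape, center):
    """Rasterize a polygon outline onto the canvas with simple line stepping."""
    pts = (shape + center).astype(int)
    for (x0, y0), (x1, y1) in zip(pts, np.roll(pts, -1, axis=0)):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in np.linspace(0, 1, steps + 1):
            x, y = int(x0 + t * (x1 - x0)), int(y0 + t * (y1 - y0))
            if 0 <= x < canvas.shape[1] and 0 <= y < canvas.shape[0]:
                canvas[y, x] = 1.0

def make_example(size=128):
    """Build one image pair: label 1 = 'same', 0 = 'different'."""
    canvas = np.zeros((size, size), dtype=np.float32)
    label = int(RNG.random() < 0.5)
    first = random_shape()
    # "Same" pairs are exact copies (same shape, size, orientation);
    # "different" pairs use an independently drawn shape.
    second = first.copy() if label else random_shape()
    draw(canvas, first, center=(size * 0.3, size * 0.5))
    draw(canvas, second, center=(size * 0.7, size * 0.5))
    return canvas, label
```

Because “same” pairs are exact copies, a classifier that truly solves the task has to compare the two shapes to each other, rather than latch onto local features of either one — which is precisely the shortcut the CNNs in these studies appeared to take.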


While AI systems haven’t cracked the same-different code, hope is not altogether lost. Many researchers believe a breakthrough is inevitable. Just last year, researchers Christina Funke and Judy Borowski of the University of Tübingen showed that by increasing the number of layers in a neural network from six to 50, they could raise a CNN’s accuracy on the SVRT same-different task to above 90%. Yet their study didn’t test how well this “deeper” CNN performed on examples outside the SVRT data set, so it revealed no new evidence that CNNs can generalize the concepts of same and different beyond a single set of rules.
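For illustration, a 50-layer network of the sort just described can be assembled from off-the-shelf parts. The sketch below is an assumption-laden stand-in, not Funke and Borowski’s published setup: it adapts a standard ResNet-50 to one-channel images and a binary same/different output.

```python
# Illustrative sketch only (not the authors' actual setup): a 50-layer
# ResNet adapted for binary same/different classification of grayscale images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)  # 50-layer backbone, randomly initialized
# Accept 1-channel (grayscale) stimuli instead of 3-channel RGB.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Replace the classification head: two classes, "same" and "different".
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a stand-in batch of stimuli and labels.
images = torch.rand(8, 1, 128, 128)   # placeholder for generated image pairs
labels = torch.randint(0, 2, (8,))    # 1 = same, 0 = different
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Note that training such a model on one set of SVRT patterns says nothing about how it performs on examples drawn from other rules — exactly the generalization caveat noted above.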


“I recommend being very careful when claiming that deep convolutional neural networks in general cannot learn the concept,” says Funke. Adam Santoro of DeepMind agrees: “Absence of evidence is not necessarily evidence of absence, and this has historically been true of neural networks.”

Other researchers aren’t so optimistic. Guillermo Puebla, a cognitive scientist at the University of Bristol who performed his own same-different study with AI, believes “it’s very unlikely that CNNs are going to solve this problem [of discriminating same from different]. They might be part of the solution if you add something else. But by themselves? It doesn’t look like it.”


Opinions vary on whether machines will ever crack the same-different code. However, most researchers seem to agree that for AI to achieve human-like intelligence, it must first master same-different concepts.




That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!

