• John Ennis

Eye on AI - July 23rd, 2021

Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!

This week, we’ll be looking at a study that reveals a new method for developing imagination in AI, how it could be used with new applications, and how it might one day help AI understand the concepts of “same” and “different” to achieve human-like intelligence.


AI’s Imagination Problem May Soon Be at An End

Two weeks ago, we touched upon AI’s struggles with the concepts of “same” and “different,” which many researchers believe prevent AI from developing a truly human-like intelligence. This week, with promising new research on AI imagination emerging out of USC, researchers may be one step closer to overcoming those struggles.

Imagination, like “same” and “different,” is a concept with which AI has struggled mightily. As far back as 2017, Google claimed to have developed AI imagination, but what it had really built was an extremely advanced form of prediction. Prediction uses data to derive likely future outcomes. Imagination, on the other hand, uses data to derive outcomes without regard for their likelihood. Think of a child gazing at the ceiling of a cathedral, then envisioning a team of gymnasts swinging from the chains dangling from the rafters, or a person sitting in traffic on a hot day, covered in sweat with the A/C blowing hot air, picturing his car suddenly transforming into an igloo. Neither outcome is likely; both are possibilities reached through extrapolation.

“[Imagination] is one of the long-sought goals of AI: creating models that can extrapolate,” writes USC’s editorial team. “This means that, given a few examples, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn't seen before. But machines are most commonly trained on sample features, pixels for instance, without taking into account the object's attributes.”

The USC research team’s study, published in the ScienceDaily article “Enabling the 'imagination' of artificial intelligence” earlier this month, attempted to overcome AI’s imagination limitations using a concept called disentanglement, an unsupervised learning technique that breaks down individual features into narrowly defined variables and encodes them as separate dimensions. The use of disentanglement isn’t new in AI. For instance, it’s commonly used in deepfakes: human face movements and identity are disentangled to allow a computer to synthesize new images and videos that substitute the original model's identity with someone else’s while keeping the original movements. USC’s researchers approached disentanglement in a similar way by taking a group of sample images (as opposed to one sample at a time, which is more common), mining their similarities, then recombining them into new, or “imagined,” images.
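To make the recombination idea concrete, here is a minimal sketch, not USC's actual implementation, of what "mixing and matching" disentangled attributes might look like. It assumes each sample has already been encoded into a latent vector whose dimensions are split into named attribute slots; the slot names and dimensions are hypothetical:

```python
import numpy as np

# Hypothetical disentangled latent layout: each sample's 12-dimensional
# code is split into named attribute slots (e.g., shape, color, size).
SLOTS = {"shape": slice(0, 4), "color": slice(4, 8), "size": slice(8, 12)}

rng = np.random.default_rng(0)
group = rng.normal(size=(3, 12))  # a group of 3 sample latents

def recombine(latents, picks):
    """Build an 'imagined' latent by taking each attribute slot
    from a (possibly different) sample in the group."""
    out = np.empty(latents.shape[1])
    for slot_name, source_idx in picks.items():
        sl = SLOTS[slot_name]
        out[sl] = latents[source_idx, sl]
    return out

# Imagine a novel combination: shape from sample 0, color from
# sample 1, size from sample 2 -- a latent no single sample had.
imagined = recombine(group, {"shape": 0, "color": 1, "size": 2})
```

In a real disentanglement model, the `imagined` latent would then be passed through a decoder to render a new image; the sketch only shows the attribute-swapping step that makes such combinations possible.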

“This is similar to how we as humans extrapolate: when a human sees a color from one object, we can easily apply it to any other object by substituting the original color with the new one,” continues USC’s editorial team. “Using their technique, the group generated a new dataset containing 1.56 million images that could help future research in the field.”

While disentanglement isn’t new in AI, what is new about USC’s application is the framework itself, which, according to USC’s researchers, could be compatible with nearly any type of data or knowledge. Doctors and biologists could use it to disentangle medicine function from other properties and recombine them to synthesize new medicines (with DeepMind’s recently released map of human proteins, this use case could become extremely effective); driverless vehicle AI could use it to imagine and avoid dangerous scenarios never seen during training; can designers could use it to create new models based on the materials and development methods on hand. The applications are practically endless.

Do USC’s findings show that AI is capable of understanding the concepts of “same” and “different,” and thus of developing human-like intelligence? Not exactly. But they do provide another example of AI’s capability to extrapolate beyond pixels and prediction, which is saying something. While AI still can’t fully understand the concepts of “same” and “different,” it’s easy to imagine how it soon could.

That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!