By John Ennis

Eye on AI - July 19th, 2019

Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!


This week we cover two main topics: new technologies that could lead to better sensory profile predictions and further developments in the robot farming industry.

New Developments Could Assist in Sensory Profile Prediction

We begin the week with more AI-related tasting news. According to TechWire News, in the article “In good taste? IBM researchers create AI-assisted e-tongue,” the IBM scientists behind Watson have created IBM Hypertaste, a portable, AI-assisted electronic tongue that uses chemical sensing to identify liquids without the need for a high-end laboratory.

“It resembles our natural senses of taste and smell where we don’t have a receptor for each molecule occurring in every kind of food or drink,” writes Patrick Ruch on behalf of IBM researchers in Zurich, Switzerland. “[We] can obtain a holistic signal, or fingerprint, of the liquid in question.”

A trained machine learning algorithm compares the digital fingerprint to known liquids within the IBM database through the Hypertaste App, identifying which liquids are most chemically similar to the liquid(s) in question.
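That matching step can be pictured as a nearest-neighbor search in "fingerprint space." The sketch below is purely illustrative and assumes nothing about IBM's actual model or data: the fingerprint vectors, liquid names, and distance metric are all invented for the example.

```python
import numpy as np

# Hypothetical reference database: each liquid's combined sensor response
# is represented as a numeric fingerprint vector (values invented).
REFERENCE_FINGERPRINTS = {
    "orange juice":  np.array([0.82, 0.10, 0.45, 0.33]),
    "apple juice":   np.array([0.78, 0.15, 0.40, 0.30]),
    "red wine":      np.array([0.20, 0.90, 0.65, 0.12]),
    "mineral water": np.array([0.05, 0.02, 0.10, 0.95]),
}

def rank_matches(sample, references, top_k=3):
    """Rank reference liquids by Euclidean distance to the sample fingerprint
    (smaller distance = more chemically similar)."""
    distances = {name: float(np.linalg.norm(sample - fp))
                 for name, fp in references.items()}
    return sorted(distances.items(), key=lambda item: item[1])[:top_k]

# An unknown sample whose fingerprint sits closest to the juices.
unknown = np.array([0.80, 0.12, 0.43, 0.31])
print(rank_matches(unknown, REFERENCE_FINGERPRINTS))
```

The real system would of course use far higher-dimensional sensor data and a trained model rather than raw distances, but the principle of "identify by similarity to known fingerprints" is the same.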

The portability of this technology could have a global impact on applications like liquid quality control, but its potential for sensory profile prediction could soon make waves in consumer markets by offering chemical ingredient recommendations. For instance, if a food company wanted to introduce a new yogurt similar to a current product but with a lower calorie count, specific added vitamins, and no gluten, it's conceivable that Hypertaste could be used to predict recipes that meet the required specification. As to actual taste, that measurement would still be left to human taste buds.

To better understand how IBM Hypertaste measures liquids through their portable, AI-assisted “e-tongue,” check out this short video:

Next, there was dramatic progress reported in the prediction of protein shape from genetic sequences. According to Bloomberg News, Alphabet, Google’s parent company, recently took a giant step forward in biochemistry. Reporter Robert Langreth, in his article “AI Drug Hunters Could Give Big Pharma a Run for Its Money,” outlines how, at the CASP13 meeting in Riviera Maya, Mexico, DeepMind, Alphabet’s artificial intelligence branch, beat seasoned biologists at predicting the 3D shapes of proteins, the basic building blocks of disease.

“With limited experience in protein folding — the physical process by which a protein acquires its three-dimensional shape — but armed with the latest neural-network algorithms, DeepMind did more than what 50 top labs from around the world could accomplish,” writes Langreth.

British newspaper The Guardian said DeepMind’s AI could “usher in [a] new era of medical progress,” while conference founder John Moult, a University of Maryland computational biologist, called it “a total surprise.”

Creating accurate 3D protein structures would help scientists find ways for medicines to attack disease. Tools that accurately model protein structures could speed up new drug development, with impacts on the healthcare and pharmaceutical industries.

DeepMind is still far from making new drug discoveries. But many believe it's ahead of any pharmaceutical company in terms of protein folding, with significant implications for the pharmaceutical and food processing industries in the not-so-distant future.

For a deeper dive into DeepMind’s impact on protein folding, click here.

Further Progress in Robotic Farming

This week saw additional advances for AI in the farming industry. In Analytics India Magazine, an article titled “Machine Learning Can Now Help Farmers Choose Crops Ripe For Harvesting” describes how a group of engineers from Cambridge University has developed a vegetable-picking robot, called a “vegebot,” capable of delicate lettuce harvests.

“The bot uses a computer vision system to photograph a section of a field of lettuce and then analyses the image to identify which of the lettuces are ripe for the picking, and which are diseased and should be avoided. Once it is done sorting the lettuce, it uses a robotic arm to gently grip the green without crushing and cuts the base to pick it,” writes author Harshajit Sarmah.
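The pick loop described in that quote can be sketched in a few lines. This is an illustrative outline only, not the Cambridge team's code: the `Lettuce` record, the ripeness score, and the threshold are all invented stand-ins for whatever the real vision system outputs.

```python
from dataclasses import dataclass

@dataclass
class Lettuce:
    x: float          # detected position in the image frame (invented units)
    y: float
    ripeness: float   # 0.0-1.0 score from a hypothetical vision model
    diseased: bool    # flagged by the same model

def select_targets(detections, ripeness_threshold=0.7):
    """Keep only lettuces worth harvesting: ripe enough and not diseased.
    The robot arm would then grip and cut each returned target."""
    return [d for d in detections
            if d.ripeness >= ripeness_threshold and not d.diseased]

# One photographed field section with three detections.
field_section = [
    Lettuce(0.2, 0.3, ripeness=0.9, diseased=False),
    Lettuce(0.5, 0.1, ripeness=0.4, diseased=False),   # too green, skipped
    Lettuce(0.8, 0.6, ripeness=0.8, diseased=True),    # diseased, skipped
]
targets = select_targets(field_section)
```

The interesting design point is the separation of concerns: the vision model only scores each plant, and a simple filter decides what the arm actually touches.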

Though the “vegebot” performs the lettuce picking well, it’s still quite slow; in due time, it should get up to speed.

Next, it's become easier for humans to train robots to perform complex movements, such as those involved in harvesting certain types of vegetables. Consider RoboRaise, a robot that uses machine learning to replicate physical movement.

“RoboRaise monitors a person’s muscles using EMG sensors with help from a machine-learning algorithm that maps signals to physical movement,” writes Will Knight in his MIT Technology Review article, “Watch this robot do the Bottle Cap Challenge—and show a new way to control machines.” “The robot then tries to mimic a person’s movements, although a user can also exert some control through careful flexing. It can be applied to any robotic hardware.”

The beauty of RoboRaise seems to be that it’s collaborative and replicable, meaning that the robots will need human assistance to learn, but once a skill is learned, it can be transferred digitally to innumerable other robots.

This technology is still in its beginning stages, but it’s easy to imagine its potential impact on physical labor in the future.

That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!
