John Ennis
Eye on AI - May 20th, 2022
Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!
This week, we’ll be looking into potential risks of large-scale AI in agriculture before switching gears to address Google’s claim of reaching human intelligence in its AI.
Enjoy!
AI Does Wonders for Farming, But at What Risk?

To begin, let’s address the new risk analysis report recently released on precision farming. According to the article “Analysts warn growing AI revolution in farming is not without ‘huge risks,’” which outlines the report, researchers warn that using AI for large-scale agriculture comes with inherent risk.
“A new risk analysis… warns that the future use of artificial intelligence in agriculture comes with ‘substantial potential risks’ for farms, farmers and food security that are poorly understood and under-appreciated,” writes Food Ingredients contributor Benjamin Ferrer. “... In their research, the authors have come up with a catalog of risks that must be considered in the responsible development of AI for agriculture – and ways to address them.”
Cyber attacks, to which large farming operations that rely on AI are particularly susceptible, stood out as the most prominent of the risks listed. Should hackers target farms to manipulate data sets, harvest production could be disrupted, leading to massive supply shortages. Additionally, AI systems programmed to deliver short-term results may ignore long-term environmental consequences, which could lead to unintended impacts such as erosion, soil depletion, and waterway contamination.
To compensate, the authors suggest two solutions: first, to use “white hat hackers,” or hackers paid to identify system vulnerabilities, to help farming operations uncover any security failings during the development phase; and second, to involve applied ecologists in the technology design process to mitigate environmental risks. However, other risks, including the impact of AI on workers and communities, may not have as straightforward a solution.
“Expert AI farming systems that don’t consider the complexities of labor inputs will ignore, and potentially sustain, the exploitation of disadvantaged communities,” said Dr. Asaf Tzachor of the University of Cambridge’s Centre for the Study of Existential Risk, one of the authors. “... Marginalization, poor internet penetration rates, and the digital divide might prevent smallholders from using advanced technologies, widening the gaps between commercial and subsistence farmers.”
Despite the risks, the benefits of AI-driven agriculture are clear. Let’s hope that those who deploy it are circumspect in their approach to help avoid unintended consequences in the future.
Google Claims AI Is Nearing Human Intelligence, Others Are Skeptical

Switching gears, let’s finish with a look at the reality behind Google’s claim that it has created an AI nearing human-like intelligence. The race to create a human-like AI is a topic we’ve been following closely these past few years, particularly the same-different dilemma and the potentially insurmountable issue of imagination. Last week, DeepMind, a subsidiary of Google, claimed its new multi-modal AI system Gato had reached human-level intelligence.
“Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said 'the game is over' in regards to solving the hardest challenges in the race to achieve artificial general intelligence (AGI),” writes Daily Mail contributor Jonathan Chadwick. “AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training.”
Freitas’ claim caused a stir, but it seems to have been premature. Freitas himself was quick to point out that while Gato can perform human tasks, it won’t be able to pass the Turing Test (a benchmark meant to gauge human-like intelligence in AI) any time soon. Other researchers aren’t buying the hype. As Tristan Greene of The Next Web put it:
“Gato’s ability to perform multiple tasks is more like a video game console that can store 600 different games, than it’s like a game you can play 600 different ways,” Greene writes. “It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.”
Google has a history of hyperbole when it comes to claims about its AI. While Gato is a powerful system, to be sure, it still has a long way to go if it’s ever going to match the complexities of human thought.
Other News
China’s world-first drone carrier ship uses AI for maritime intelligence
Google AI attempted to write an episode of Starship
Brewery partners with the Alberta Machine Intelligence Institute (Amii) to get an AI boost
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!