By John Ennis

Eye on AI - July 1st, 2022

Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!


We’re all about AI trust this week as we delve into a new taxonomy-based method of building explainability into AI models, and into whether we should feel comfortable trusting our infrastructure to the management of machines.


Building Explainability into AI May Help with Future AI Trust and Adoption

It’s no secret that AI has a black box problem, the ‘black box’ referring to the lack of explainability inherent in many complex AI models (for most of them, we don’t understand how or why decisions are being made). It is this black box problem that looms large over AI adoption, as humans intrinsically don’t trust things they can’t understand.

“Data scientists typically select and handcraft features for the model, and they mainly focus on ensuring features are developed to improve model accuracy, not on whether a decision-maker can understand them,” writes TechXplore contributor Adam Zewe. “... domain experts, most of whom lack machine-learning knowledge, often don't trust models because they don't understand the features that influence predictions.”

According to the article “Building explainability into the components of machine-learning models,” MIT researchers have been working hard to address this problem by partnering with decision-makers and domain experts over the past several years to study ways to address machine-learning usability and explainability challenges. This led to the MIT team building something called a taxonomy, which prioritizes explainability in each model’s creation.

“To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model's prediction,” continues Zewe. “They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.”

At the heart of the taxonomy is the idea that one size doesn't fit all. MIT researchers identified the explainability needs of different groups of people through a number of projects, then defined which properties were most important to each group. The taxonomy asks what level of interpretability is needed, then explains how the model works based on that need. The hope is that this work will lead to a system of simpler, more humanistic explanations of AI models, putting explainability and trust at the forefront of AI’s creation.
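To make the "one size doesn't fit all" idea concrete, the taxonomy can be thought of as a lookup from a user group to the kind of feature explanation that group can act on. The sketch below is purely illustrative: the group names, properties, and formatting rules are my own assumptions for demonstration, not the MIT researchers' actual taxonomy.

```python
# Illustrative sketch only: a toy mapping from user group to the
# explanation format that group needs. All names and rules here are
# hypothetical, not taken from the MIT taxonomy itself.

# Each user group maps to the properties an explanation should have
# before that group will trust it.
EXPLANATION_NEEDS = {
    "ml_expert":       {"format": "raw_features"},
    "domain_expert":   {"format": "domain_terms"},
    "affected_person": {"format": "plain_language"},
}

def explanation_for(user_group: str, feature: str) -> str:
    """Return a description of `feature` tailored to the user's needs."""
    needs = EXPLANATION_NEEDS.get(user_group)
    if needs is None:
        raise ValueError(f"Unknown user group: {user_group}")
    if needs["format"] == "raw_features":
        # ML experts can read the raw feature name directly.
        return feature
    if needs["format"] == "domain_terms":
        # Domain experts want the feature expressed in their vocabulary.
        return f"Feature '{feature}', expressed in domain units"
    # Laypeople affected by the prediction get plain language.
    return f"A factor related to {feature.replace('_', ' ')}"
```

For example, a raw feature like `glucose_mean_7d` would be passed through unchanged for an ML expert, but rendered as "A factor related to glucose mean 7d" for a person affected by the model's prediction.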

Can We Trust AI with Critical Infrastructure?

Building on the theme of trust, the recent Forbes article titled “Can We Trust Critical Infrastructure To Artificial Intelligence?” questions whether it’s safe to turn over our sensitive infrastructure grids to the management of AI.

According to the article, AI is set to play a fundamental role across all our core infrastructure, with the potential to vastly improve these systems by identifying patterns we as humans can’t see. In power, utilities, and energy alone, AI can examine massive amounts of data across power plants and accurately forecast when surplus energy is available to charge batteries (or when stored energy should be released), identify new methods of sustainable energy consumption and CO2 reduction, and suggest new ways of cutting energy costs. These kinds of life-altering impacts could be seen across every type of infrastructure with the help of AI. However, many still wonder whether it’s wise to leave our most critical infrastructure to the decisions of machines, especially machines we don’t understand.

To address this issue, the author, much like Zewe in the previous article, believes the solution lies in refocusing on AI’s explainability.

“[Explainability] is fundamental for describing corrective recommendations in a human-readable way with clear evidence that mitigates uncertainty and risk,” writes Forbes contributor AJ Abdallat. “... AI solutions' usefulness may be measured by human-usability with their definitive worth equating to their ability to provide humans with usable intelligence so they can make quicker, more precise decisions and develop confidence.”

Trust or no, all signs point to AI’s rapid advancement. While AI has immense potential for good, it has equal potential for misuse or abuse. Because of this, I believe there must be regulation around AI explainability, particularly for AI systems that will be running the critical grids we so heavily depend on. Time will tell whether such regulations are ever adopted.

That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!
