John Ennis

Eye on AI - July 17th, 2020

Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!

 

This week, we’ll be discussing the growing problem of transparency in AI systems: why it’s so complex, how it’s being addressed, and what potential solutions might look like.


Enjoy!


The Complexities of AI Transparency



To begin, let’s look at the Forbes article titled “How AI Researchers Are Tackling Transparent AI: Interview With Steve Eglash, Stanford University”, which touches on how the problem of AI algorithm transparency took shape.


“One of the often cited challenges with AI is the inability to get well-understood explanations of how the AI systems are making decisions,” writes Forbes contributor Ron Schmelzer. “While this might not be a challenge for machine learning applications such as product recommendations or personalization scenarios, any use of AI in critical applications where decisions need to be understood face transparency and explainability issues…. Because of the size of neural networks, it can be hard to check them for errors as each connection between neurons and their weights adds levels of complexity that makes examination of decisions after-the-fact very difficult.”

The problem, continues Schmelzer, is that poorly understood AI systems are increasingly being used in mission-critical ways (i.e., situations where a single error may lead to dire outcomes, such as misidentifying a suspect or causing an autonomous-driving accident). These AI systems, commonly referred to as ‘black boxes,’ may produce accurate results, but the process they use to get there may be inherently flawed, so much so that a newly introduced variable might compromise the entire system. The method may be flawed. The data may be flawed. Often, there’s no way of knowing until something goes wrong, which is how it was discovered that an AI system meant to predict future criminals was infused with racial bias.
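
How do you catch that kind of bias when the model itself can’t be opened up? When you can at least query the model, a simple black-box probe such as permutation importance can reveal which features it leans on. Below is a minimal sketch using scikit-learn on synthetic data; the feature names (and the idea of a ‘zip-code proxy’) are purely hypothetical, and a real audit would go much deeper.

```python
# Minimal sketch: probing a black-box classifier with permutation importance.
# The model and data are synthetic stand-ins; the point is that shuffling one
# feature at a time shows how heavily the model relies on it, and an
# unexpectedly important proxy for a sensitive attribute is a red flag.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
feature_names = ["income", "tenure", "zip_code_proxy", "age", "noise"]  # hypothetical labels

model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>15}: {score:.3f}")
```

If a feature that should be irrelevant dominates the ranking, you have a concrete, inspectable red flag, even though the model itself remains a black box.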


By understanding how these complex AI systems work, we can identify problems and address them accordingly. That, of course, requires transparency. And in most cases, AI systems aren’t becoming more transparent, but less.


“There’s a huge downside to transparency,” adds TNW contributor Tristan Greene in his article on the black box problem. “If the world can figure out how your AI works, it can figure out how to make it work without you. The companies making money off of black box AI – especially those like Palantir, Facebook, Amazon, and Google who have managed to entrench biased AI within government systems – don’t want to open the black box any more than they want their competitors to have access to their research. Transparency is expensive and, often, exposes just how unethical some companies’ use of AI is.”

Forbes contributor Ron Schmelzer, in his article ‘Towards a More Transparent AI’ published last month, takes this idea one step further.


“There is no transparency,” writes Schmelzer. “As a model consumer, you just have the model. Use it or lose it.... As the market shifts from model builders to model consumers, this is increasingly an unacceptable answer. The market needs more visibility and more transparency to be able to trust models that others are building. Should you trust that model that the cloud provider is providing? What about the model embedded in the tool you depend on? What visibility do you have to how the model was put together and how it will be iterated? The answer right now is little to none.”

Is It Possible to Solve the AI Transparency Problem?



Let’s continue by looking at a few ways researchers are addressing issues of AI system transparency, beginning with Reluplex, a researcher-designed program that verifies the behavior of deep neural networks.


“The technology behind Reluplex allows it to quickly operate across large neural networks,” continues Schmelzer. “Reluplex was used to test an airborne collision detection and avoidance system for autonomous drones. When it was used, the program was able to prove that some parts of the network worked as it should. However, it was also able to find an error with the network that was able to be fixed in the next implementation.”
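
To make the idea concrete: verifiers like Reluplex answer questions of the form “over an entire range of inputs, can the network’s output ever violate a safety bound?” The sketch below is not Reluplex (which uses an exact, SMT-based procedure); it is a toy interval-arithmetic over-approximation on a made-up two-layer ReLU network, assumed weights and all, just to show what such a property check looks like.

```python
# Toy property check: does the output stay below a safety bound for EVERY input
# in a given box? Uses interval arithmetic on a tiny, hand-specified ReLU network.
# This illustrates the question such verifiers answer; it is not Reluplex itself.

import numpy as np

# Hypothetical 2-layer network: y = W2 @ relu(W1 @ x + b1) + b2
W1 = np.array([[1.0, -2.0], [0.5, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.5])

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine layer W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

# Input region: each coordinate in [-1, 1]
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU
lo, hi = interval_affine(lo, hi, W2, b2)

SAFE_BOUND = 10.0  # assumed safety threshold, purely for illustration
print(f"Output range over the whole input box: [{lo[0]:.2f}, {hi[0]:.2f}]")
print("Property holds" if hi[0] <= SAFE_BOUND else "Possible violation; inspect further")
```

Interval arithmetic is conservative (it can flag a possible violation that never actually occurs), which is part of why exact procedures like the one behind Reluplex matter for mission-critical checks.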

While I love the idea behind this technology, once again we run into the same transparency issue. Companies aren’t required to audit their systems. They don’t have to show their work, and most of them don’t. Even if a company were to use Reluplex to check its AI systems, nothing would compel it to share the results.


And then there’s the idea of a ‘scaled-back’ approach to black-box AI systems, meaning unexplainable AI tech would only be used in non-mission-critical ways. As Schmelzer puts it:


“Since AI systems have not yet proven their explainability and complete trustworthiness, Steve thinks AI will be mostly used in an augmented and assisted manner, rather than fully autonomous. By keeping the human in the loop, we get a better chance to keep an eye when the system is making questionable decisions and exert more control on the final outcome of AI-assisted actions.”
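
In practice, ‘keeping the human in the loop’ often amounts to a deferral gate: the system only acts on its own when its confidence clears a threshold, and everything else is routed to a person. Here is a minimal sketch; the threshold, the stand-in model, and the review step are all hypothetical.

```python
# Minimal human-in-the-loop gate: the model decides only when it is confident;
# low-confidence cases are deferred to a human reviewer. All values are stand-ins.

from dataclasses import dataclass
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.90  # assumed policy, tuned per application

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str

def model_predict(case) -> Tuple[str, float]:
    # Stand-in for a real model; returns (label, confidence).
    return "approve", 0.62

def human_review(case) -> str:
    # Stand-in for routing the case to a human reviewer.
    return "deny"

def decide(case) -> Decision:
    label, confidence = model_predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Below threshold: the human makes the final call.
    return Decision(human_review(case), confidence, decided_by="human")

print(decide({"applicant_id": 123}))
```

Recording who (or what) made each call matters as much as the gate itself: that audit trail is what lets you go back and examine questionable decisions after the fact.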

But again, we run into the same transparency problem. There’s nothing to prevent a company from using unexplainable AI, and understanding complex AI takes significant time and resources. Some systems are so complex that no one can yet explain them. One could argue that, with little regulation, companies are actually incentivized to implement ‘black box’ algorithms in mission-critical ways, because if they don’t, someone else likely will, and beat them to market.


Of course, the trajectory of AI systems may change tomorrow. Regulations may come. New programs may allow us to more easily catch up with the processes behind our most complex AI systems. Companies may forgo profits for transparency, releasing the secrets of their AI systems to the world and making us all the better for it (I wouldn’t hold my breath on that last one). The good news: AI advances seem to be slowing as AI systems become more complex, pumping the brakes on the idea that Skynet may be only a few years out. As for pumping the brakes on implementing AI systems we don’t yet understand in mission-critical ways, since when has a little lack of understanding ever stopped humans from implementing anything?


Other News



 

That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!

