John Ennis

Eye on AI - August 7th, 2020

Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!

 

This week, we’ll be looking at the darker side of AI: its business limitations, potential for abuse, and the most dangerous crimes it might be used for in the future.


Enjoy!


How AI Might Potentially Fail Your Business



To begin, let’s take a look at a recent Forbes article titled “When Does Artificial Intelligence Not Work In Business?”, in which contributor Josh Coyne addresses that very question by touching on the three main areas where AI might fail your business: capabilities vs. expectations, financial impact, and amorality.


Capabilities vs. expectations is an issue with any technology we don’t fully understand. Business leaders are sometimes conditioned to look at AI as that shiny ‘fix-all’ tech, and for good reason: it does fix many things for many businesses. Yet where the light shines on the benefits of AI, misconceptions about cost vs. benefit may lurk in the shadows: just because AI benefits one company doesn’t mean it will benefit yours equally.


Think of it this way: if your business hopes to save on energy costs, solar energy might seem like the rational move. It’s great for branding. It saved 70% of energy costs for one of your clients. Why shouldn’t it do the same for you? But what if your business is located in a cloudy, low-sunlight area? What if you need to redirect sunlight to hit your solar panels, or implement costly technology to capture more of it? What if, even after all that, you’re only getting a fraction of the savings your competitors get from the same or even more advanced technology? Would those solar panels still be worth it? Probably not. The same idea is true of AI, and what sun is to solar, data is to machine learning.


“Developing ML models requires collecting, storing, transforming, labeling, and serving data in production,” writes Coyne. “These processes can quickly add up in cost, leading to meaningful gross margin compression. Even training a single model can cost hundreds of thousands of dollars in computational overhead. Additional COGS stem from human-in-the-loop structures (annotation and human failover), model inference in production, and decreasing marginal returns to training models.”
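To make that concrete, here is a rough back-of-envelope sketch in Python. All of the figures below are hypothetical placeholders we made up for illustration, not numbers from Coyne’s article or any real project, but they show how quickly training compute, labeling, and serving can stack up for a single production model.

```python
# Back-of-envelope ML cost sketch -- all figures are hypothetical placeholders,
# not estimates from the article or from any real project.

def training_cost(gpu_count, hours, usd_per_gpu_hour):
    """Compute-only cost of a single training run."""
    return gpu_count * hours * usd_per_gpu_hour

def labeling_cost(num_examples, usd_per_label):
    """Human-in-the-loop annotation cost."""
    return num_examples * usd_per_label

def monthly_inference_cost(requests_per_month, usd_per_1k_requests):
    """Ongoing cost of serving the model in production."""
    return requests_per_month / 1_000 * usd_per_1k_requests

if __name__ == "__main__":
    train = training_cost(gpu_count=256, hours=300, usd_per_gpu_hour=3.0)   # ~$230k, one run
    label = labeling_cost(num_examples=2_000_000, usd_per_label=0.08)       # ~$160k
    serve = monthly_inference_cost(requests_per_month=50_000_000,
                                   usd_per_1k_requests=0.40)                # ~$20k every month
    print(f"One training run:   ${train:,.0f}")
    print(f"Dataset labeling:   ${label:,.0f}")
    print(f"Serving, per month: ${serve:,.0f}")
```

Even with made-up numbers, notice that only the training run is a one-time expense; labeling refreshes, human failover, and inference keep recurring, which is exactly the gross margin compression Coyne describes.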

The last issue, amorality, is perhaps the most important. It deals with the ethical viability of AI models. As mentioned before, models rely mainly on data. Researchers are responsible for feeding data into models, then training them on that data to inform decision making. Faults within a model’s algorithm or its data can be addressed and corrected when researchers have accurately tracked the evolution of the decision-making process. But when models become too complex or aren’t accurately mapped, they may become self-contained ‘black boxes’ that evolve in a closed system even the developing researchers don’t understand.


“The desire to create the best performing AI models has made many organizations prioritize complexity over explainability and trust, opening the door to potential biases,” continues Coyne. “An example is the Gender Shades study in 2018. In the study, it was revealed that facial recognition services by Microsoft and IBM performed better on men than women. As the world becomes more dependent on algorithms for decision-making, it’s critical that explainability becomes a core component of ML models.”
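As a rough illustration of what such a check might look like in practice, here is a minimal sketch in Python using scikit-learn and synthetic data. It is not the methodology of the Gender Shades study (which audited commercial face-recognition services); it simply shows the general idea of breaking a model’s accuracy down by demographic group before it ships.

```python
# Minimal per-group performance audit -- illustrative only, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic dataset: features, labels, and a group attribute (e.g., 0 = men, 1 = women).
X = rng.normal(size=(1_000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1_000) > 0).astype(int)
group = rng.integers(0, 2, size=1_000)

model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# Report accuracy separately for each group; a large gap between groups is a
# red flag worth investigating before the model goes anywhere near production.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: accuracy = {accuracy_score(y[mask], preds[mask]):.3f}")
```

A simple breakdown like this won’t explain *why* a model underperforms for one group, but it makes the disparity visible, which is the first step toward the explainability Coyne calls for.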

Supplement this article with “Examples of Failure in Artificial Intelligence” to review six of the most newsworthy AI failures of the past few years.


Researchers Rank 20 Most Dangerous AI Crimes of the Future



We conclude with a look into the future of what one might call ‘evil AI’: the twenty most dangerous crimes AI could enable over the next fifteen years, as ranked by a team of researchers. The list, compiled by scientists at University College London and drawn from academic papers, news reports, popular culture, and commentary by notable AI experts, ranged from driverless vehicles delivering explosives to the proliferation of AI systems in key applications like public safety and financial transactions, and the many opportunities for attack they represent. Once compiled, the list was given to AI researchers to rank in order of concern based on four criteria: the harm a crime could cause, the potential for criminal profit or gain, how easily it could be carried out, and how difficult it would be to stop. The number one concern, which you’ve likely seen in the news lately: deepfakes.


Additional applications receiving the ‘highly worrying’ designation include AI-authored, undetectable fake news; AI-infused phishing attacks, perpetrated via crafty messages impossible to distinguish from genuine ones; and large-scale blackmail, enabled by AI’s potential to harvest large personal datasets and other information from social media and other easily penetrated accounts.


As the saying goes, you can’t have the sweet without the sour. For all the good AI does and will continue to do, there are people out there who will try to use it for harm. The better we understand how it can be used for good and ill, the better we can prepare for potential evil AI attacks (I’m imagining Sarah Connor reading this in the future, nodding emphatically).


Other News



 

That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!

