By Danielle van Hout

Eye on AI - November 18th, 2022

Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!


This week, we’ll be looking into new challenges universities are facing with students' use of language AI before switching gears to address how retailers are using AI to prevent theft.


Universities Facing New Cheating Challenges with Advances in Language AI

Let’s begin by unpacking the article “‘Full-on robot writing’: the artificial challenge facing universities,” which describes how the rise of language AI is creating new challenges for how universities address the potential for student abuse.

“Two years ago, computer scientist Nassim Dehouche published a piece demonstrating that GPT-3 could produce credible academic writing undetectable by the usual anti-plagiarism software,” writes Guardian contributor Jeff Sparrow. “… He now thinks we’re already well past the time when students could generate entire essays (and other forms of writing) using algorithmic methods.”

Powerful language models such as GPT-3 give students the ability to produce human-like writing on demand, and for free. While this wouldn’t constitute plagiarism, it is, in essence, a form of cheating because the work is not their own. Yet most universities don’t readily address this issue; it falls into a sort of gray area: work that is completely original, but not produced by the students themselves, generated instead with an AI system.

“In news and opinion articles, GPT-3 has convincingly written on whether it poses a threat to humanity (it says it doesn’t), and about animal cruelty in the styles of both Bob Dylan and William Shakespeare,” continues Sparrow. “A 2021 Forbes article about AI essay writing culminated in a dramatic mic-drop: ‘this post about using an AI to write essays in school,’ it explained, ‘was written using an artificial intelligence content writing tool.’”

Most researchers believe that this type of AI is detrimental to the development of critical thinking. However, there are dissenters. Scott Graham, for example, in his opinion piece for Inside Higher Ed, encouraged students to use AI technology, claiming that GPT-3 could produce only the bare minimum for an assignment, and that weaker students would struggle to use it, since giving the system effective prompts (and then editing its output) requires higher-level writing skills.

Another view comes from Deakin University’s Prof Phillip Dawson, who specializes in digital assessment security and frames AI as a form of “cognitive offloading”: a tool students can use to reduce the mental burden of a task. He stops short of recommending that students use AI to write entire essays, and instead encourages universities to set clear parameters on what kind of cognitive offloading is permitted for each assignment.

Sounds complicated, no? It is. Even with parameters in place, universities have no reliable way to enforce them. And by eliminating the use of AI altogether, universities may be putting their students at a disadvantage. Most advanced professions use AI in one form or another, and it’s already an essential tool of journalism. If it’s accepted in the real world, why should it be prohibited in universities? Then again, if it is allowed, how much should be allowed, and how can it be regulated? The answers, it seems, aren’t black and white, and may require a more philosophical approach to understand.

“To put the argument another way, AI raises issues for the education sector that extend beyond whatever immediate measures might be taken to govern student use of such systems,” continues Sparrow. “One could, for instance, imagine the technology facilitating a ‘boring dystopia’, further degrading those aspects of the university already most eroded by corporate imperatives… But maybe, just maybe, the challenge of AI might encourage something else. Perhaps it might foster a conversation about what education is and, most importantly, what we want it to be.”

Can AI Be Used to Stop Retail Theft?

While AI is exposing challenges in education, it’s solving others in retail. According to the article “How artificial intelligence is being used to stop retail theft,” brands are increasingly turning to AI to combat rising crime rates.

“According to the National Retail Federation's 2022 National Retail Security Survey, retailers reported losing almost $100 billion in products to retail crime,” writes News Channel 5 contributor Chris Stewart. “... More than half of stores are increasing what they spend on security and loss prevention, according to the National Retail Federation.”

Lunardi’s, a Bay Area grocery store, implemented the Veesion camera system, which has become popular among retailers. The cameras use AI to detect when a shopper’s bag or appearance changes, suggesting something new has been added. According to the head of Lunardi’s security, the Veesion system catches, on average, one shoplifter per day.

Retail crime hurts both brands and customers: when too many products are stolen, brands raise prices to recoup losses or to help pay for security. And because police don’t typically treat retail crime as seriously as they do traditional crimes, stores are forced to come up with better forms of theft prevention themselves.

The National Retail Federation is pushing Congress to pass two bills by the end of 2022: the Combating Organized Retail Crime Act and the INFORM Consumers Act, which would help with data sharing on theft and third-party seller verification. These are steps in the right direction, since too much in-store security tends to feel intrusive, and this isn’t a problem with a single fix-all solution.

That's it for now. If you'd like to receive email updates from Aigora, click on the button below to join our email list. Thanks for stopping by!
