John Ennis
Eye on AI - April 8th, 2022
Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!
We’re covering groundbreaking news in natural language processing (NLP) this week, including Google’s new Pathways Language Model (PaLM), by far the most powerful language model to date, and OpenAI’s DALL·E 2, which creates remarkably accurate artwork from text by refining a pattern of random dots.
Enjoy!
Pathways Language Model Alters Future of Natural Language Processing

We begin with Google’s groundbreaking announcement last week of their new Pathways Language Model (PaLM), a large language model with roughly triple the parameter count of the previous record holder, and one that more closely resembles human thinking than any language model before it.
“In ‘PaLM: Scaling Language Modeling with Pathways’, we introduce the Pathways Language Model (PaLM), a 540-billion parameter, dense decoder-only Transformer model trained with the Pathways system,” reads the Google blog announcement. “... We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases.”
To understand why this announcement is so groundbreaking, it helps to first understand some basics of natural language processing (NLP) as a whole. At the heart of every NLP application lies a language model, which predicts which words and patterns are likely to come next in a piece of text. Language models are what allow chatbots to understand and respond to questions, Alexa to deliver information, autocorrect to catch texting errors, and so on, and their capabilities are limited by the number of parameters they contain. In layman’s terms, think of parameters as you would subjects in school: the more subjects a student studies, the more situations they’ll be prepared for in the real world. Likewise, the greater the number of parameters, the wider the range of potential applications and the more ‘well rounded,’ or human-like, so to speak, a model becomes (see tree graph).
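For readers who like to see the idea in action, here’s a minimal sketch of that next-word prediction using the small, openly available GPT-2 model through the Hugging Face transformers library. GPT-2 is only a stand-in (PaLM itself is not publicly available), and the prompt is an arbitrary example of my own:

```python
# Minimal illustration of what a language model does: score candidate next words.
# GPT-2 stands in here because PaLM is not publicly available.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The barista handed me a cup of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # one score per vocabulary word, per position

next_word_scores = logits[0, -1]         # scores for whatever comes right after the prompt
top = torch.topk(next_word_scores, k=5)  # the five most likely continuations

for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), round(float(score), 2))
```

The more parameters a model has, the more nuanced these predictions become, which is exactly where PaLM’s scale comes in.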
Prior to PaLM, OpenAI’s GPT-3 model held the record for the highest number of parameters at 175 billion. That scale allowed it to handle a variety of tasks, such as drafting essays, writing code, and holding a relatively coherent conversation. It wasn’t perfect. But it was a leap forward in terms of NLP’s overall capabilities. PaLM more than triples GPT-3’s parameter count at 540 billion, opening the door to an even wider range of potential applications. The examples shown in the early release are limited, but those listed are remarkably impressive and demonstrate both how quickly the NLP field is advancing and the potential scalability of the Pathways system as a whole.
“Pushing the limits of model scale enables breakthrough few-shot performance of PaLM across a variety of natural language processing, reasoning, and code tasks,” the post continues. “PaLM paves the way for even more capable models by combining the scaling capabilities with novel architectural choices and training schemes…”
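“Few-shot” here simply means showing the model a handful of worked examples inside the prompt and letting it infer the pattern, rather than retraining it for the task. Here’s a rough sketch of what that looks like in practice using GPT-3 through the openai Python client as it existed at the time; the model choice, prompt, and tasting-note examples are placeholders of my own, not anything from Google’s or OpenAI’s announcements:

```python
# Sketch of few-shot prompting: a couple of worked examples in the prompt,
# then a new case for the model to complete. Requires an OpenAI API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Classify the sentiment of each tasting note.\n"
    "Note: 'Bright citrus aroma, crisp finish.' Sentiment: positive\n"
    "Note: 'Stale, cardboard-like aftertaste.' Sentiment: negative\n"
    "Note: 'Smooth texture with a pleasant cocoa note.' Sentiment:"
)

response = openai.Completion.create(
    engine="davinci",   # a GPT-3 model available at the time
    prompt=prompt,
    max_tokens=5,
    temperature=0,
    stop="\n",
)
print(response.choices[0].text.strip())  # expected: "positive"
```

PaLM’s claim is that, at its scale, this same show-a-few-examples approach works across hundreds of such tasks with no task-specific training at all.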
AI System Creates Realistic Art from Text, May Upend Art & NFT Worlds

Somewhat coincidentally, OpenAI, creators of the newly surpassed GPT-3 language model, recently released DALL·E 2, the next iteration of the original DALL·E, which creates realistic art from simple text prompts.
“DALL·E 2 can create original, realistic images and art from a text description… us[ing] a process called ‘diffusion,’ which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image,” reads the DALL·E 2 homepage.
Think up any image you can imagine, and chances are DALL·E 2 can create a fairly accurate version of it. Want an elephant doing a cannonball off a diving board? No problem. DALL·E 2 takes that prompt, draws on the images it was trained on, then uses diffusion (those random dots) to create a completely original piece of artwork matching its best guess at your description. It can also take an existing artwork and add subtle variations – Munch’s The Scream set in San Francisco, for example, or Dalí’s melting clocks reimagined as cars on a highway instead of clocks in the desert – or simply add a dog or a couch to an image you already have.
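To make the “pattern of random dots” idea a bit more concrete, here’s a toy sketch of the diffusion loop. In the real system a large neural network, conditioned on your text prompt, decides how to nudge the noise at each step; here a tiny hand-written function stands in for that network purely to show the shape of the process:

```python
# Toy illustration of the diffusion idea: start from random dots and
# repeatedly nudge the pattern toward an image. A simple hand-written
# function stands in for the learned, text-conditioned denoiser that
# DALL-E 2 actually uses.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "target" image: a bright square on a dark 8x8 background.
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0

def denoise_step(image, strength=0.1):
    """Pull the noisy image a small step toward the target (stand-in for the neural network)."""
    return image + strength * (target - image)

image = rng.normal(size=(8, 8))   # the initial "pattern of random dots"
for _ in range(60):               # gradually alter that pattern
    image = denoise_step(image)

print(np.round(image, 2))         # now close to the bright-square target
```

Of course, the real model never has a fixed target image to copy; it has only learned from its training images what plausible pictures matching the text tend to look like, and it steers the noise in that direction.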
DALL·E 2 still has the occasional slip-up, though from what I’ve seen these are typically minor. Even so, it’s leaps and bounds more impressive than any other text-to-art AI I’ve come across. I’m particularly interested to see what effect this technology will have on NFTs and the art industry as a whole.
Other News
Snoop Dogg caps years of metaverse investment by dropping a music video set in his own ‘Snoopverse’
Volkswagen South Africa joins metaverse with NFT treasure hunt game
NFTs, Crypto, Web3, and, of course, food seem to be the main ingredients of the ‘Foodverse’
What will food be like in the metaverse, and how is the industry preparing?
Coca-Cola’s pixel-flavored Zero Sugar ‘Byte’ aims to reach gamers in the metaverse
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!