Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!
AigoraCast is available on Apple Podcasts, Stitcher, Google Podcasts, Spotify, PodCast Republic, Pandora, and Amazon Music. Remember to subscribe, and please leave a positive review if you like what you hear!
As the Chairman, founder, and architect of Compusense, Chris brings a lifetime's worth of skill and experience to the sensory and consumer science field. Since founding Compusense in 1986, Chris has been an active participant in and promoter of the growth of the sensory and consumer science community. From the beginning, he has been committed to "good science" and has always been reluctant to support dubious sensory methods. In the early days of temporal research, Chris and his colleagues pioneered dual-attribute time intensity and also developed the Feedback Calibration Method for training proficient descriptive analysis panelists, a method that continues to save clients valuable time and budget.
As the company continued to grow rapidly, Chris began to teach Sensory Evaluation at the University of Guelph in the Department of Food Science. During this time, his students would complete their labs at Compusense and gain real-world experience working in a sensory lab. Chris continues to hold Graduate Faculty appointments in both Food Science and Mathematics and Statistics.
While teaching, and to this day, Chris remains active in and committed to the organizations of the sensory and consumer science field. In 2011 he received the inaugural Sensory and Consumer Sciences Achievement Award from IFT in recognition of his commitment. He is also active in ASTM E-18 and was a founder of the Society of Sensory Professionals. In 2008, Chris had the honour of chairing the 9th Sensometrics Meeting at Brock University, and he was also one of the chairs of the 9th Pangborn Meeting held in Toronto in 2011.
In 2017, ASTM International Committee E18 presented Chris with the David R. Peryam Award in recognition of his outstanding commitment to the field of sensory science.
Chris enjoys the ability to conduct independent research with talented and generous collaborators from around the globe, continuing to contribute to the ever-expanding community of sensory and consumer science.
Transcript (Semi-automated, forgive typos!)
John: Chris, welcome to the show, and thank you for being here.
Chris: It's my pleasure, John. I'm looking forward to it.
John: Yeah, me too. So, Chris, you helped shepherd sensory science through the computing revolution. What parallels do you see now with the current technological revolution, what is sometimes called the fourth industrial revolution?
Chris: So the question is really appropriate, John, because we're going through a fundamental change in the way people think. Back in the dark ages, when I started, the challenge we had was that data was being collected, if it was being collected at all, on paper, and everyone felt pretty comfortable with paper. What we had to do at that time was make people comfortable enough that what they were doing on computers was providing the same information they were getting on paper. Paper is a very flexible medium. It allows people to make changes on the fly, to try and erase and compensate for things. The world of computing is a little bit more demanding, although now we have flexibility that didn't exist back then. What it meant was that people had to do a better job of structuring their tests. Now, what I liked about that was that it was really an application of the scientific method: you don't just go and collect a bunch of data. You design the experiment, you decide the tests that you're going to run and what your outcome is likely to be, so you can at least come up with a hypothesis that you're then testing, and then apply appropriate statistics to it. So many of the tests that were being run on paper were done in simply a tabular form, and a simple count was used to come up with a decision. Statistics were very seldom used, and in fact, even today, you see lots of people still wanting to use tables to read off their results. And of course, we know now that we have the computing power to calculate p-values precisely.
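As an editorial aside, the contrast between reading significance off a printed table and computing it directly can be made concrete with a triangle discrimination test. The sketch below is an illustration only, and the panel numbers (17 correct out of 36) are invented:

```python
from math import comb

def triangle_p_value(correct: int, n: int) -> float:
    """Exact one-sided binomial p-value for a triangle test.

    Under the null hypothesis of no perceivable difference, each
    assessor picks the odd sample by chance with probability 1/3;
    the p-value is P(X >= correct) for X ~ Binomial(n, 1/3).
    """
    p0 = 1 / 3
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(correct, n + 1))

# Hypothetical panel: 17 of 36 assessors identify the odd sample.
print(round(triangle_p_value(17, 36), 4))
```

The exact tail sum replaces the rounded critical-count tables the conversation refers to; no approximation is needed at typical panel sizes.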
The question, of course, is why we do the tests in the first place, and it's about making a decision. The challenge that we had 30 years ago was that by doing the work on paper, the whole process was rather time-intensive. It could easily take a week to get a result back. And if you were in the process of developing a product, if you had to wait a week to get the changes that you should be making to your prototype, then the whole process became very slow. So the challenge that I had really was in convincing people that they were going to get the same results that they would have on paper, only faster, and be able to use some of the advantages of computing that didn't exist with paper.
John: And do you see the same challenge emerging now? I mean, you see immersive testing, right? There's, in some sense, the idea of testing in an augmented reality or virtual reality environment, or maybe testing with a survey that's really dynamic, like a chatbot, or maybe having Alexa administer a survey. You've got all these new ideas for survey methods. What's your opinion? What's your take on all these new directions that surveys seem to be going?
Chris: So you're asking me about a parallel between the innovation of computing and where we are now? I think the biggest parallel is trust, it's about confidence. You know, are we getting something that's useful, or is it just interesting? And there seems to be some pretty shaky ground between actually making measurements on consumers and influencing the way in which consumers behave. One of the challenges that we had in the early application of computing was that management in some companies worried that their employees who were spending time providing these results were actually just playing on the computer, that they weren't doing anything of any value. And I think this is probably one of the challenges that we have right now. So we know that virtual reality is very interesting, augmented reality is terrifically interesting. I mean, there are some performances where they've been enormously well applied and are terrifically entertaining. I've also seen the results that Chris Hyman, for example, has obtained, and there's some work on this that they've published. And there's no question that you do get a different result when you add virtual reality or augmented reality or some effective context, and we know how big context is in consumer response. Whether that is actually useful or not, I'm not sure. As far as I can tell right now, the jury is still out on it. Now, maybe there are some really interesting ways of doing market research where we can create completely fictional environments and be able to test concepts. But I'm an old-fashioned guy. I'm still interested in product. I like to see how consumers respond to real product, because we know that when we ask people conceptual questions, they will give us nice hypothetical answers that might be quite different from the answers we get when they're confronted with real products.
I think the other thing that you and I have talked about many times in the past is the complexity now of the consumer market. The selections in any product category are vast, and I'm afraid that for the average consumer this is overwhelming. It's extremely confusing. At the same time, if we look at liking as the measure of response, it's a very blunt instrument, and lots of products can have similar liking. In fact, the whole concept of category gets challenged around liking, context, and the actual timing of any question. I can elaborate on that, but I'm not sure that's really where we need to go. But I think, from the standpoint of meeting this challenge, it is about having confidence that the information we are developing is reliable, repeatable, and actionable.
John: Yeah. The repeatable piece is definitely extremely important, in terms of consistency and reliability, that you're getting consistent measurements. And I guess there's always the question of validity, right? Is what you're measuring the same as what you think you're measuring? And I think that's where some of the immersive testing comes in, at least the idea is that it's closer to the real experience. You know, when you put someone in an imaginary bar environment, that might be closer to their experience. And it might not be too far in the future that we are actually living in virtual environments, in which case testing in a virtual environment is exactly the perfect thing to do, because you're testing in the environment. So, yeah, we'll see.
Chris: I think what you're really referring to, and it's a very good concept, is ecological validity. Is it something that we're actually likely to do? By testing within a real environment, we really know what's going on. So, for example, I have enjoyed a number of presentations on NASA's challenge in supplying provisions for a mission to Mars. There are technological problems associated with that, but one of the fundamental problems is absolute boredom from getting the same rotation of menu components. In fact, even on the International Space Station they're having the same challenges, and there's great excitement when they get a fresh delivery of pizza, because it breaks that particular boredom. So I think that there are some challenges in terms of creating context. For example, we know that the U.S. Army Natick labs have done terrific work over the years on Meals, Ready-to-Eat for the military, because, as we know from Napoleon, an army marches on its stomach, and a lot of the aspects of attitude and morale relate to the food that soldiers are getting to eat. But we know from practice that those well-designed Meals, Ready-to-Eat are traded and broken down, and some of them are not eaten at all.
John: Right. The best laid plans.
Chris: Yes, exactly. So I think, coming back to the whole idea, the claim that an occasion in virtual reality gives us a valid and lasting result is a bit challenging.
John: So I have kind of a two-part question here. What are the big challenges that you see your clients facing right now? What challenges are they facing that are different from, say, 5 or 10 years ago? And what is Compusense doing in response? That's the two-part question.
Chris: Well, there's no question, John, that everybody wants to have faster results, and they want to get them cheaper. And I have a concern. There's a little Venn diagram that I use of three overlapping circles: validity, cost, and speed. And you can have two out of three, but not three out of three. Now, there has been quite a bit of work done on rapid methods, both on the analytical sensory profiling side and also in terms of doing consumer work, and although it's interesting, I think there's a fair amount of risk associated with it. What we've been doing is research over the last decade or longer to improve the methods. So, for example, as you mentioned in the background, we came up with Feedback Calibration as a method for training panels. What we've seen is a reduction from having to do evaluations in triplicate to doing them in duplicate, and in fact getting very robust results from well-trained, calibrated panelists doing just a single evaluation. When you have the opportunity to cut the amount of time you're spending to one third, to cut the training down from being something that could easily be two weeks to being two hours, then you have a tool that's useful, because it is reliable and repeatable. It's not just quick and dirty; it's something that you can show robust repeatability with. Now, the other tool we've developed is essentially a sensory-informed design. The idea behind that is pretty simple as well: you can't do a category appraisal of 20 products where every consumer sees every product. Even if it were feasible to do it in a single sitting, there would be terrific order effects, there would be fatigue, and everything else that goes with that. And when you do it over multiple days, then you start introducing day-to-day effects. So we end up with noisy data being produced in this way.
But if we have a sensory-informed design, then we can actually bring the number of samples, say 10 to 20, down to four being seen by an individual consumer. As long as those four are chosen by design to place that consumer within the sensory space, then it provides us with segmentation, but it also supplies us with good consumer information. So, you know, these are the methods that we're looking at to be able to provide high quality data in a fraction of the time and at much less cost than a conventional study would require.
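As an editorial aside, the serving-balance half of such an incomplete design can be sketched in a few lines. This is only a sketch with hypothetical product names; a real sensory-informed design would also choose each consumer's four products to span the sensory space, which this code does not attempt:

```python
import random

def incomplete_blocks(products, block_size, n_consumers, seed=1):
    """Give each consumer a subset (block) of the products such that,
    across consumers, every product is served equally often.

    Each round shuffles the full product list and chops it into blocks,
    so within one round every product is served exactly once.
    """
    if len(products) % block_size:
        raise ValueError("block_size must divide the number of products")
    rng = random.Random(seed)
    per_round = len(products) // block_size
    blocks = []
    while len(blocks) < n_consumers:
        order = list(products)
        rng.shuffle(order)
        blocks += [order[i * block_size:(i + 1) * block_size]
                   for i in range(per_round)]
    return blocks[:n_consumers]

products = [f"P{i:02d}" for i in range(20)]        # 20 hypothetical products
design = incomplete_blocks(products, block_size=4, n_consumers=100)
# 100 consumers x 4 servings / 20 products = 20 servings per product.
counts = {p: sum(p in block for block in design) for p in products}
print(min(counts.values()), max(counts.values()))  # 20 20
```

Because 100 consumers is a whole number of shuffle rounds, the balance here is exact; with other panel sizes the counts differ by at most one.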
John: That's interesting. So your solution to some of these needs has really been to advance the science, the methodology, that it isn't necessarily always a technological solution, that better science can actually be the correct answer?
Chris: Absolutely. I believe very strongly in the quality of data. One of the concerns that I've expressed about applications of AI is that most of the machine learning models are really devoted to vast quantities of data, millions of data points, and most of the applications I've seen are data-thirsty; they want more. Where I want to go is actually using fairly parsimonious data sets, but data sets of very good quality. Then we don't have to use brute force to compensate for the noise that we have in the data; we simply reduce a lot of that noise. So I think there are some great advantages in collecting better quality data. However, the one caveat I would put in here is that it means the work has to be better planned. You can't just run a spontaneous "let's do a little taste test" and expect to have robust and repeatable results coming out of it.
John: Yeah, I think that is kind of the agile idea, you know, which comes out of software development. It's in fact a good way to plan software, but it sometimes becomes an excuse for people to just do poor quality work quickly. Which gets back to your statement that you can't have things that are fast, cheap, and high quality. It's just not possible.
Chris: So you remember the joke, or maybe you don't, about the speed reader who read War and Peace in 15 minutes. When he was asked how he enjoyed the book, he said, "Well, I think it was about Russia or something." So, you know, if we do things at high speed, we're going to sacrifice stuff. And I think that so much of the information that we can get in sensory relates to our understanding of individual perception and individual response. We're now gaining a deeper understanding of what the genetic implications are. We're also learning a lot more about the relationship with the human microbiome and how adaptive we are as individuals. And, you know, this is something that allows us to understand why some people respond positively to a particular product and some respond negatively: because they are perceiving something different.
John: Yes. Right. That was a big takeaway for me from Pangborn this year, just how subjective taste is. I really do believe the actual experience that two people go through when they taste something is potentially quite a bit different. And that's one of the challenges, I think, for these machine learning models trying to predict sensory profiles from instrumental measurements: the targets aren't even the same from person to person.
Chris: So I think it's interesting, because so many of the statistical approaches that we've had in the past have been univariate. You know, the analysis of variance of individual attributes has been one of the standard ways of looking at difference. Now, I'm a great fan of multivariate stats, because I think it moves closer to human experience, which is multisensory and complex. And I think most of the interesting stuff in the future is in complexity. I've got great confidence that we're actually going to be able to make that giant leap of taking gas chromatography-mass spectrometry data and, using AI, learning complexity, learning complex individual human response.
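As an editorial aside, the univariate-versus-multivariate contrast can be made concrete with a principal component analysis of a product-by-attribute descriptive matrix: instead of testing each attribute in isolation, PCA captures how attributes move together. The data below are invented for illustration:

```python
import numpy as np

# Hypothetical descriptive data: rows = 5 products, columns = 4 attributes
# (panel mean intensities on a 0-10 scale).
X = np.array([
    [7.2, 3.1, 5.5, 2.0],
    [6.8, 3.4, 5.1, 2.3],
    [2.1, 8.0, 1.9, 7.5],
    [2.5, 7.6, 2.2, 7.1],
    [5.0, 5.0, 4.0, 4.5],
])

Xc = X - X.mean(axis=0)           # centre each attribute
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                    # product coordinates on the components
loadings = Vt                     # how each attribute contributes
explained = s**2 / (s**2).sum()   # share of variance per component
print(explained.round(3))
```

In this toy matrix the four attributes are strongly correlated, so one component captures most of the product-to-product variation, which is exactly the kind of structure a univariate attribute-by-attribute ANOVA cannot show.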
John: But as an input to that model, you're going to have to know things about the people for whom you're predicting the results, right? That's the thing that has always been, I think, the missing ingredient in those kinds of predictions: understanding what the differences between people are that drive the differences in experience, and putting those into the model. And I think that's why things like the E-tongue and E-nose have only met with success in limited applications; there have probably been more failures than successes.
Chris: Well, I think it would be fair and honest to say that I've been exposed to this technology for a long time. One of the confessions that I always make is that my undergraduate degree is in physical chemistry; I started off in pharmaceuticals. So I'm a great fan of instrumental methods when those methods give me the answers that I need. The issue around the E-tongue is exactly the same one we had around fingerprinting with gas chromatography: it is very good at telling us that things are the same, but not very good at telling us how they are different.
John: Right, that's a good point.
Chris: Yeah, and I think we're going to be getting closer to that, because if we start to understand the strengths, weaknesses, and abilities of our individual assessors, then we can actually come up with something that's useful. We've actually looked at data from time intensity to group individual assessors into clusters of similarity, and those clusters are definitely going to be physiologically based. And I think we have the same opportunity in developing larger databases on wide ranges of consumers. Linda Bartoshuk was attempting to do that years ago, over PROP sensitivity and prediction based upon likes and dislikes of a list of about 30 foods. The problem with that is that it is a self-administered test. So if I say "coleslaw" to you, then you're going to imagine the coleslaw that you consume, which could be a creamy coleslaw rather than an oil-and-vinegar coleslaw. So we could be using the same words while meaning things that are entirely different, and there's always that chance. So I see some interesting work taking place right now using odor pens or scratch-and-sniff to provide a standardized set of stimuli for people to respond to, and then we get a much better understanding of individual differences.
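As an editorial aside, the grouping of assessors by the similarity of their time-intensity curves can be sketched with a deliberately simple clustering. All numbers below are synthetic: four assessors whose perceived intensity decays quickly and four for whom it lingers, separated by a minimal 2-cluster k-means:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)

# Synthetic time-intensity curves for 8 assessors: four fast decayers
# and four lingerers, plus a little measurement noise.
curves = np.vstack(
    [np.exp(-6.0 * t) + rng.normal(0, 0.03, t.size) for _ in range(4)] +
    [np.exp(-1.5 * t) + rng.normal(0, 0.03, t.size) for _ in range(4)]
)

def two_means(X, iters=20):
    """Minimal 2-cluster k-means, seeded with the two most distant rows."""
    d = ((X[:, None] - X[None]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    centers = X[[i, j]]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.vstack([X[labels == m].mean(0) for m in range(2)])
    return labels

labels = two_means(curves)
print(labels)
```

With well-separated curve shapes the two clusters recover the fast and slow groups; real panel data would of course need more care in distance choice and cluster count.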
John: Right. Well, that's right, there are many sources of variability. OK. I had always thought that when you run a category appraisal and you get different liking scores from different people on the products, different patterns, well, the sensory experience was the same, people just have different preferences, right? They have some built-up desire for one sensory experience, while somebody else loves a different sensory experience. But I think the extra wrinkle, which I really saw at Pangborn and which you're bringing out again, is that people are actually different in how they perceive things; what's actually happening, the perception, is in fact different. Which mirrors the problem that you just alluded to: when you say coleslaw, you think of one coleslaw and I think of another; we have different conceptions of coleslaw. But if we actually taste the same coleslaw, we could have the same problem again, where what you experience is different from what I experience. It's the same problem at a different level: the first problem is conceptual and the second is physiological.
Chris: So I'm going to take that aside and talk about one of the early pioneers of our field, and that's Peter Panther. I miss Peter terribly; I'm so sorry that he died as young as he did. But he introduced me to Generalized Procrustes Analysis, and, you know, the concept of a mathematical consensus based on the responses of individuals to the same stimulus was what I found fascinating. We know that individuals are different, but we know that the stimulus they're being exposed to is the same. When we take that into consideration, then we can start to understand how we can group consumers based on a combination of their perception and their preference, because those are going to be quite complementary. There's no question that supertasters exist; there are people who have the genetic marker, the TAS2R38 gene. I have a daughter and a wife who are supertasters, and one of the interesting things is that they're quite happy consuming broccoli, and they drink coffee, because they learned that they didn't have to like it at the perceptual level; they had to like it at the social level. And this is an interesting thing that happens to all of us: we don't consume everything that we like, nor like everything that we consume.
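As an editorial aside, the alignment at the heart of Generalized Procrustes Analysis can be shown in miniature. Full GPA iterates this alignment of every assessor's configuration against an evolving consensus; the sketch below, with made-up coordinates, performs a single ordinary Procrustes alignment of one assessor's product map onto another's, removing translation, rotation, and scale differences:

```python
import numpy as np

def procrustes_align(A, B):
    """Translate, rotate, and scale configuration B to best match A.

    This is the single-pair (ordinary) Procrustes step; GPA repeats it
    against a group consensus until the consensus stabilises.
    """
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    U, s, Vt = np.linalg.svd(B0.T @ A0)   # optimal rotation via SVD
    R = U @ Vt
    scale = s.sum() / (B0 ** 2).sum()     # optimal isotropic scaling
    return scale * B0 @ R + A.mean(axis=0)

# Hypothetical 2-D product map from assessor A; assessor B's map is
# here the same configuration rotated, scaled, and shifted.
A = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.1], [1.3, 1.0], [0.2, 0.6]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
B = 2.0 * A @ rot + np.array([3.0, 1.0])
aligned = procrustes_align(A, B)
print(np.allclose(aligned, A))  # True: the alignment recovers A exactly
```

When configurations differ only by translation, rotation, and scale, the alignment is exact; the residual disagreement after alignment is what GPA treats as genuine individual difference.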
John: Right, well. Yeah.
Chris: Well, they are. And I think, again, the interesting thing that I learned from Peter is that you can make pretty interesting maps that put people and products in the same positions. I think the same thing is true with landscape segmentation analysis: you can get a very good idea of where products and populations fit together, and this is what gives us the information, the lead, that we need to create new products, or, for that matter, to cull products that are simply duplicates. One of Harry Lawless's favorite stories was from his time working with S.C. Johnson on the Glade air freshener product line. They had over 30 products in that line, and Harry did the sensory work on it and found that there were really only seven different aroma categories; they just had different names. They were able to do a fine job rationalizing the line because, you know, the names were, well, I guess, evocative more than anything else.
John: Right. Yeah, it's funny that brand managers never want to give products up.
Chris: Brand managers get promoted if they launch new products. So they move on to that fairly quickly.
John: So, Chris, we're almost out of time, but while we're still on the call I do want to ask you about a topic that really comes up a lot in my client work: the desire for usable databases. Databases that can be accessed for historical norms, or for trying to get some new insights, or, if nothing else, just to figure out: do we already know the answer to the question that we're trying to ask with this new project? What do you see when it comes to databases? Can you talk about how Compusense is working with some of your clients to help them manage their databases? It's a hot topic I'd like to hear about.
Chris: This is actually a very challenging topic, because the question is really the quality of the data. The public sources of data, as you well know, are full of noise. There have been some very nice studies done on them that have shown some interesting potential trends. Now, one of the highlights of Pangborn for me was listening to Michael Circus, and I had a pretty good chat with him towards the end of the conference. His message, a very clear message, was that by the time you see the trend, it is already past; your opportunity is likely gone. And I think there is just a huge challenge in the area of innovation, because if you're the first on the market with a brand new kombucha and it starts to gain traction in the marketplace, then you can do extremely well. If you're second or third in, then you can probably compensate for some of the shortcomings of that market leader by coming up with products of wider appeal. But if you're the 100th product into that market, what are you going to do?
John: You think, "My kombucha is still going to be different," when they've already got 100.
Chris: Well, you know, we know from the assessments that we've done that we end up with a lot of products that sit on one another's shoulders. They are so similar that the only real differentiator is brand. And brand is almost a story, and everyone likes a good story; in fact, that's what we remember more than anything else. I have students that come back to me years later and say, "Dr. Findlay, I took that course that you taught, and I don't really remember very much except that story that you told us." That's what gets remembered. I think consumers use brand very much as a shorthand, to give them confidence.
John: Yeah, yeah, and actually, in that vein, did you go to the Scotch tasting workshop with David Thompson?
Chris: I did not but I have spent a lot of time tasting Scotch with David Thompson.
John: Well, it was excellent. I really did get the idea that, like you said, brand is story. And I think that really is a way forward for a company: making sure that the brand is aligned with the sensory experience.
Chris: One of the stories that I love to tell, and we're going to see if we can squeeze it into the minutes that are left. So there was a product, which still exists on the market, called Lochan Ora, "the golden loch." This was a product launched in the 1980's by Seagram's, and it was meant to compete with Drambuie: a Scotch-based, honey-sweetened liqueur. The story was of the golden loch, with the heather honey from the hillsides around it, and they came up with a beautiful package; the story behind it was excellent. The problem was that people do not usually buy a bottle of liqueur and consume it all in one sitting. It gets opened, it gets served, and then it gets sealed. Well, at that point you've got air in the bottle, and within a month flavour reversion took place, and the flavour the next time around was more like paint thinner. So it killed the product right off the bat; they withdrew it from the market. The product still exists and you see it from time to time, but it really destroyed what was potentially a very good product, and it was very well researched. So you have a promise that goes along with the brand and the product, and then it is not delivered by the product. If you don't deliver on that promise, then you have a problem.
John: Right. Right. And I think that's where sensory research can really support businesses: by ensuring that we fulfill the sensory promise.
Chris: So, you were asking me about databases. The one area of databases that we've promoted with our clients, and that we are actively working on, is the product database, where we look at the sensory properties of products over time. With calibrated descriptive analysis, using the Feedback Calibration Method, you can then look at the trends that are taking place over a longer period of time.
John: Yes, that's an excellent point, because I would say that is a deadly problem that I see with my clients: when they try to go back and look at their historical data, there's all sorts of panel drift, all sorts of training issues, real data quality issues. And so it gets back to your theme: if you don't have good quality information to begin with, it really doesn't matter how much of it there is. Maybe at some point you can overwhelm the noise, but we're not going to have millions and billions of data points, to the point that we can overwhelm those quality issues and biases.
Chris: Exactly. So, going back into the mists of time, when we introduced computerized collection of data, I drew a parallel at that time with the computerization of accounting systems. If a company had a good accounting system to begin with and they computerized it, then what they had was a very rapid and reliable way of getting information. If they had a poor system to begin with and all they did was automate it, then they ended up with automated garbage.
John: Yeah. Bill Gates said that if you have a good process, the best thing you can do is automate it, and if you have a bad process, the worst thing you can do is automate it. Maybe we should end on that note: no matter how much technology you have, if you don't have good sensory science underneath it, then all the fancy methods are for naught.
Chris: Exactly right. And the last word on artificial intelligence: I believe that the real thing is what's important. I think we need real human intelligence being applied, and the wisdom to figure out which is which at the end of the day.
John: OK, that's great, Chris. Well, thank you very much for being our first guest ever on AigoraCast. It's really an honor having you here, and I hope our listeners enjoyed it as well.
Chris: I understand that there are people excited, that they want to hear how it comes out. Yeah. Thank you, John.
John: OK. Thanks a lot, Chris. OK, that's it for this week. Remember to subscribe to AigoraCast to hear more conversations like this one in the future. And if you have any questions about any of the material we discussed or recommendations about who should be on the show in the future, please feel free to contact me on aigora.com or to connect with me through LinkedIn. Thanks for listening. And until next time.
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!