Aigora means: "Now is the time for market researchers to prepare for the rise of artificial intelligence."

Dave Lundahl - Take Off

  • Writer: Matthew Saweikis
  • Jun 10
  • 29 min read

Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!


AigoraCast is available on Apple Podcasts, Spotify, Podcast Republic, Pandora, and Amazon Music. Remember to subscribe, and please leave a positive review if you like what you hear!



Dave Lundahl is passionate about applying consumer insights to create a cleaner, healthier, and happier world through innovation. In 2003 he founded InsightsNow with a vision to apply behavioral science to advance product innovation. This led to publishing Breakthrough Food Product Innovation through Emotions Research (2011), the Disruptive Innovation Award by NextGen Marketing Research (2017), a US patent to measure implicit reactions (2024), and now the 2025 IFT Lifetime Achievement Award for Sensory and Consumer Science. He gives back as a member of the Advisory Boards for the Dept. of Food Science & Technology and the Marketing Programs in the College of Business at Oregon State University.






Transcript (Semi-automated, forgive typos!)


Dr. John Ennis: Welcome to AigoraCast: conversations with industry experts on how new technologies are impacting sensory and consumer science.


Dr. John Ennis: Okay, welcome back everyone to another episode of AigoraCast. Today I'm very happy to have Dave Lundahl on the show. Dave Lundahl is passionate about applying consumer insights to create a cleaner, healthier, and happier world through innovation. In 2003, he founded InsightsNow with a vision to apply behavioral science to advance product innovation. This led to publishing Breakthrough Food Product Innovation Through Emotions Research, the Disruptive Innovation Award by NextGen Marketing Research, a U.S. patent to measure implicit reactions, and now being the recipient of the 2025 IFT Lifetime Achievement Award for Sensory and Consumer Science. He gives back as a member of the advisory boards for the Department of Food Science and Technology and the marketing programs in the College of Business at Oregon State University. So Dave, welcome to the show.

Dave Lundahl: Hey, thanks, John.

Dr. John Ennis: Yeah.

Dave Lundahl: Great to be here.

Dr. John Ennis: Yeah, it's great. And you know, I had your colleague, Greg, on the show not too long ago, so we've got a kind of one-two punch here. And I think, yeah, he may have softened the target and maybe now you can finish the job, I guess. I'm not really sure how to end that.

Dave Lundahl: Well, Greg said he was really excited to be on the show, so.

Dr. John Ennis: Yeah, he was really, it was great to talk to him. So, I know that we have a shared interest in AI, and we're both seeing, you know, tremendous changes happening both in society and in our industry. So maybe let's start with a little background, because you've had really an illustrious career and you've seen the field go through kind of twists and turns. So how do you see the current new technologies? Let's just jump into it. How do you see AI impacting our field?

Dave Lundahl: Yeah, I think there are three things that stand out to me. Number one, it's going to change every single aspect of research. Every single aspect. Maybe 80% of everything we're doing today is going to be changed by AI. The second thing, it's going to impact our clients and the client needs and demands that they have. And third, it's going to impact the way we work as individuals, you know, the sort of work-life balance that we all have. I think you can't underestimate that part of how it's going to impact us all who are working in some sort of research, whether it's sensory or marketing research or however you want to call it. It is life-changing for us all. And it's exciting.

Dr. John Ennis: It is exciting, and I have to say honestly, it's a little bit exhausting. I have been up till 2:00 a.m. every night this week because there's so much going on, and then I have to get up at 5:00 because I have kids. So it is hard to stay up to speed on what's going on. It's so rapid. But I think what you just said about work-life balance is really interesting, so I'd like to hear your thoughts on that. You know, the Industrial Revolution had a huge impact on how people worked and even changed where people lived. And then of course, with the invention of indoor lighting during the Second Industrial Revolution, we had electric lighting, and that had a big impact: you had fluorescent lights, people could stay up in the evening, they didn't have to have candles. What do you see changing now when it comes to work-life balance? I'm curious why you brought that up as one of the key things.

Dave Lundahl: Well, you know, just all the things that we're seeing, the advancements for these agents now to help us in all the tasks we're doing. Technology, when it's really working, becomes ubiquitous, right?

Dr. John Ennis: Right, right.

Dave Lundahl: It is something we don't even think about. And right now we're going through this time of change, so of course we're thinking about it. We're being exposed to new things, but I believe here in the next few years, we're going to see these agents impact us in everything we're doing. So does that mean that we're going to have more time on our hands to do other things? Or does that mean we're going to fill up our time doing more things? I think it's more of the latter, but we're going through this time of change, of trying to figure things out, that is causing a lot of disruption and perhaps angst amongst some people. But I think it will impact our lives and our work-life balance in much the same way that the pandemic impacted the ability for us to work remotely.

Dr. John Ennis: Mhm. No, I think that's really insightful, because I have felt, especially over the last month, that I've even felt myself changing. You know, I've posted about it on LinkedIn: I'm talking to agents all day long. I mean, it's changing me personally. I have a tendency to be a workaholic, and so now I'm in the car, I'm talking to an agent; I go home, I'm firing off tasks before I go to bed. I'm always working. And I'm starting to think it's unhealthy, you know? It used to be there were some brakes on the system. Other people weren't awake, other people were out of the office, but now Gemini is always there, ChatGPT is always there. And I think that is a really profound point, Dave, actually, that I hadn't quite put together. I just felt like something was wrong, but I didn't know what it was. And I think it's that: the agents are always there. And so that's good and bad. That's really interesting.

Dave Lundahl: Yeah, how do you turn it off?

Dr. John Ennis: Right. So yeah, you probably can't. There goes your me time.

Dave Lundahl: Yeah.

Dr. John Ennis: Yeah, that's right. Oh, it's fascinating. Okay, well, that's a good point to start the show on. So why don't we take a step back? Because you've had, as we mentioned, a long career. I would guess most of our listeners are familiar with you and your work, but maybe you've got some younger people out there. Can you take us on a little Cook's tour of your career, how you've seen things evolve, and how that gives you the perspective you have right now?

Dave Lundahl: Oh, sure. I mean, I grew up at a time, and this is going to date me, when I started working in this industry and it was large mainframe computers; the personal computer, the PC, was just coming out.

Dr. John Ennis: Right.

Dave Lundahl: And I remember programming an Apple IIe to do one- and two-way analysis of variance. I was a statistician at the time. And that was my introduction to applying sensory and consumer research. That type of technology was changing things; it changed the world at that point. And so it progressed from there, with more and more automation of data. My first business was actually a company called InfoSense. Those who have been around a while can remember that, perhaps. But that company was all about automating tasks for doing data analysis. It had some intelligence built into it, based on how you interrogate data and then decide what to do with it, to make things easier, so you didn't need a PhD in statistics to do things. Anyway, I saw that evolve, and then eventually we got into the 90s, where more and more technology and the internet started becoming ways to move data around and for us to collaborate in teams more and more. The modeling got really incredible; all the multivariate modeling that we did in the 90s was really impactful in terms of understanding insights from complex data. And then when we founded InsightsNow in 2003, we started understanding something was missing. What was driving the things I was doing and thinking about, from a business perspective and from a research perspective, was: why were products going onto the marketplace and still having really low success rates? In spite of all this technology, all this knowledge, all these analytics, what was it about? There was something still missing. And as we got into research, we realized what was missing was understanding human behavior more fundamentally. We really had good ways to understand human perception, but how does perception factor into how people engage with products and come to know products? How does that impact behaviors? That got us into understanding human emotions more, which is when I wrote my first book. That was in 2011. Now, 2011 was the same year that Daniel Kahneman came out with Thinking, Fast and Slow, which is also such an impactful understanding of human beings: that we have these two modes of thinking, fast and slow. And that really pushed us further to understand the dynamics of these modes of thinking and how they impact behaviors, and how perception comes into play, and emotions, and all this. So from that, we built models around this, ways to really delve into it. It's been quite a bit of a focus for us. Now, today, it's not only having the insights from these models that is really changing the world, but AI is changing the way you synthesize data.

Dr. John Ennis: Right.

Dave Lundahl: So you can have more types of information that you can synthesize, and you can train up these agents for a lot of different types of tasks. And that is also now taking our knowledge of behavioral science, our knowledge of human perception, and the models that you can create from that to predict. Now I can teach and train AI agents so that you can use them for various different types of tasks. That is the future. That is really changing things dramatically and is going to create a lot more value. Hopefully, in the end, industry can be more effective in how they design products and how they market products, so that you can create a healthier, happier world for people. So I think it's about creating better things for people to use rather than more things for people to use. At least that is my hope and dream as a researcher, you know.

Dr. John Ennis: Yes, definitely. No, I mean, you see that as a theme with technology: it tends to make things that were only available to the very wealthy available to everybody. And a simple example we already see: I have a Tesla full self-driving car, and it's still supervised, but a self-driving car is basically a chauffeur. That's something that was only available to the wealthy, and now it's available to everyone, or close to it, and it will be available to everyone. And when you think about personalized meals, and food that really delights you and, as you're saying, makes you healthier and happier, these are the sorts of things that wealthy people enjoy. They have their own chefs, a chef that looks after the nutrition: here's the meal that you love and it's exactly the way you like it. And I think that experience will start to become more available through the products. The products will really delight people. Now, I think you mentioned making research more impactful. And something that we've talked about, and I talked to Greg a little bit about this as well, is the role of simulations, because there's a lot of talk about synthetic data, digital twins, this kind of thing. What are your thoughts on simulations and their role within a research program? Do you see them as something that is going to provide value in the future? How are you looking at that?

Dave Lundahl: Yeah, I view simulating data as just another way of modeling. You know, a simulation is modeling and creating outcomes. Where I have issues with it as a statistician, putting that hat on, is where you generate data and then you apply normal statistics to that data as if it were a random sample from some population that you want to make inference upon. That I have issue with. But I don't have any issue with the modeling and creating these things, realizing it is based on some prediction.

Dr. John Ennis: Yes. That's right.

Dave Lundahl: So, yeah, if you want to generate data in order to help you understand the model outcomes, great. But to use that data in some way to make additional inference, as if it were generated from real people, I haven't seen anything yet that tells me we're ready to do that. Not saying it couldn't happen.
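A minimal sketch of the statistical caveat Dave raises here, with made-up numbers throughout: fit a simple model to a small "real" sample, simulate a large synthetic sample from that fit, and then run ordinary inference on both. The synthetic interval looks far more precise, but it only restates the fitted model's assumptions rather than adding evidence about real consumers.

```python
# Hedged illustration only: values, sample sizes, and the normal model are assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" study: 60 consumers rate a prototype on a 9-point hedonic scale.
real = np.clip(np.round(rng.normal(6.0, 1.8, size=60)), 1, 9)
mu_hat, sd_hat = real.mean(), real.std(ddof=1)

# 5,000 "synthetic consumers" simulated from the fitted model.
synthetic = np.clip(np.round(rng.normal(mu_hat, sd_hat, size=5000)), 1, 9)

def ci95(x):
    # Ordinary 95% confidence interval for the mean, treating x as a random sample.
    se = x.std(ddof=1) / np.sqrt(len(x))
    return round(x.mean() - 1.96 * se, 2), round(x.mean() + 1.96 * se, 2)

print("95% CI from the real sample:     ", ci95(real))
print("95% CI from the synthetic sample:", ci95(synthetic))
# The second interval is much narrower, yet it contains no information beyond
# what the 60 real respondents already provided -- which is Dave's objection.
```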

Dr. John Ennis: Yeah. Well, I'm with you actually, because even when you think about what an LLM is doing when it responds, it is simulating what a human would say or do or write in this situation. It's trained on all the data; it's basically this next-token prediction model. And I think the knowledge that is inside a big model like Google Gemini or ChatGPT or whatever is not actually the knowledge of the world. It's a simulation of the knowledge of the world. That's the way to think about it: it's an approximation, where the knowledge of the world has been distilled down. It's sort of like if you were to dehydrate a sponge and then rehydrate it: you get something like what you had originally, but when it goes back to its full shape, it's not quite the same as it was when it went in. And so I think that is a very good point, that you can treat simulated data as useful, and you might put it in a research pipeline and use it to test ideas, but you should never forget that it's not real data and that at some point you're going to have to go get real data. So I think we're on the same page there, because I see it as a useful step for screening ideas, especially if you're able to generate lots of simulations. Maybe you can test thousands of ideas ahead of time and get a read on what's likely to succeed or not with real people. But at some point you're going to have to go out there and test with real people. So maybe that takes us to our next question, which is: what do you think humans bring that is special or different, that is going to be difficult or impossible for machines to replace? I think that's something that is on everyone's mind right now, because there's a lot of fear of job loss, a lot of fear of, you know, where are we going? Are we just going to a world where there'll be nothing left for humans to do or contribute? So what do you see as some of the key differences between the humans and the machines?

Dave Lundahl: Yeah, I mean, AI, again, can synthesize massive amounts of content that is qualitative in nature. And I think the quant will also soon be fully synthesizable, if that's a word. And it can bring that together into some smartness that you can query, that you can prompt and gain some sort of information from that is more predictive. But once you've generated those agents, if you can use that word, that you can then interact with, it still needs a human being to interact with it. Now, there's certainly automation that can occur, because you can build some sort of software or some way in which you are automating the engagement with that agent to perform various tasks, and very complex tasks. But the setup of that is still going to need a human being. Okay? And the interaction with it in various ways also requires a human being to create the value. All right? That human in the loop, as some people call it, and I like that term, is not going to go away, but it will allow for more automation of tasks, for creating new tasks towards outcomes that are faster and more efficient, to get to some business decision, if you want to call it that, or some research insight. That is really what the focus should be. So I believe, if you step back and just think about business, where we play and where we create value, and there are other ways to create value, but just from a business perspective, today it's all about speed.

Dr. John Ennis: Right. Yeah.

Dave Lundahl: And speed allows businesses to make decisions faster, so that the time horizon of each decision is shorter, which reduces the uncertainty about that business decision. We live in a very uncertain world, especially right now, given a lot of geopolitical things, and the impact of AI creates uncertainty as well. Those are two of the main factors today, and there are other factors at play too. But if you look at the long term, these uncertainties are what business needs to be able to work within. And in the shorter term, the faster we can speed up decisions in a way that carries some certainty, the better. As a whole, businesses are going to be more effective, make fewer mistakes, be able to operate in this time of uncertainty, be more profitable, and achieve the value in the world that they're trying to achieve. So I think that's a lot of what's going on here. Speed is really important in these tasks that AI can help us achieve with the information that we have out there, which is vast, both quant and qual.

Dr. John Ennis: No, that's very interesting. A bunch of points there. I like talking to you, Dave. You're a very deep guy; I've already learned a bunch of things on this call. So one of them is that speed is valuable for risk reduction: the longer it takes to do something, the greater the risks associated with it. And if you can act quickly, then you have more certainty, because the near-term future is more likely to be like the present than the farther-term future. So that's very interesting.

Dave Lundahl: Very well put.

Dr. John Ennis: That's right. Yep. And you know, it's funny, my wife and I are watching that TV show, Vikings. I don't know if you've seen that show, but we're watching it now. It's okay, not as good as some shows I've seen.

Dave Lundahl: Oh, it's a great show.

Dr. John Ennis: Oh, you liked it? Okay. We'll see how it turns out. It's no Game of Thrones so far, but we'll see. Maybe it turns out well. But you know, the Vikings, what made them successful to a large degree was speed. And when you look at the history of successful militaries: the French military, Napoleon was fast, right? Alexander, definitely fast. Even the Mongols, Genghis Khan, fast, speed. And yeah, it's really kind of a deep point. And with AI, I find that the speed changes the way you work. When I think about code I used to write, I would hang on to it because it was valuable and I had put work into it. Now my code is largely disposable, other than the real projects I'm working on. At this point, I don't even look for apps to do things anymore. I just write them. I need to format some Python code? I'm just going to spin up an app on, you know, AI Studio. I can have an app in three minutes to do whatever I want. And so it's all disposable. And that does seem to change the nature of work in a fundamental way, where things that were once valuable because they were hard to reproduce become so easy to reproduce that they become disposable. So I think that's another downstream effect of speed. But to come back to what you said about meaning, I thought that was also really interesting, meaning and value. I think you've got activity, which the bots, the agents, are great at; they can do lots of things. But at the end of the day, I think you need a human to have any kind of value or meaning. If you had a bunch of robots on this planet just going around and all the humans were gone, it would be completely empty. There would be no meaning to it at all. It would be the same as a rock. And I think it's important we not forget that, because I think sensory could go one of two ways right now. Either everything gets automated and businesses feel like, all right, we're doing our sensory stuff and it's all automated, and that would be a huge tragedy. Or they remember that humans provide value. And I think for us as sensory scientists right now, we need to lean into the fact that our field is about understanding the human experience. That's where the meaning comes from. Let me just think of a simple example. I was in the car with my children, and they were drinking some soda and eating some chips and the windows were down and they were really happy. And the products were supporting this whole experience they were having. It was meaningful. They're going to have this memory of this nice experience that was created, and I'm sure the children will want to buy those chips again. So maybe let's talk about what you see as the future of sensory and consumer science. As more and more things get automated, what do you think the challenges are going to be, and how can we as sensory and consumer scientists position ourselves to lead through this transition?

Dave Lundahl: Yeah. Sensory has been very effective, amazingly effective, in changing how products are created, how you design and develop various different consumer products. I'll just stay within the consumer products realm; sensory has other applications beyond that, but let's stay within CPG sorts of applications. The field evolved around this idea of understanding liking as a construct of what drives people to do things. And so the industry, even today, is highly focused on liking. Let's look at the example of a functional food, okay? You create a food product that has some functionality, like it has some nutritional qualities in it, or other qualities, that provide some functionality: so I can become healthier, I'm going to solve some of my health and wellness issues that are concerning to me, or it's going to address the aspirations I have in life. It's going to make me have more energy, feel younger, look younger perhaps. I'm going to be more well, whatever those aspirations are. The problem with the way we've thought this through, being so focused on liking, is that we've created this paradigm where a marketer says we need to create a functional product. So you design the functionality into it. You add things to, let's say, the food, using the food example, to make it functionally nutritious or whatever. And then you maximize the liking. So if the product doesn't taste very good when you add something to it, or it's texturally not very good because it's got a lot of protein in it or whatever, then you work on making it liked, maximizing liking. That paradigm is going to change, has to change now, because of AI, or AI will factor into changing this paradigm. With the paradigm of liking, in many ways we've created foods that are too well liked in some cases, to the point where we have perhaps even contributed to the obesity epidemic, because foods are hyper palatable and people just like them so much they can't stop eating them. Okay? So anyway, this construct of liking. I think the future of sensory, and AI is going to help it get there, is about understanding perceptions and cues that signal benefits, signal things that are of value to people, and building those cues into a product. So, for instance, if you look at an energy drink, I'm going to use Red Bull as a good example: if you did a blind taste test on Red Bull for liking, it gets a low score. I've actually seen some of the early data from Red Bull; it's quite entertaining. It doesn't really taste good. But that flavor impact of the original Red Bull product that came out signaled that the product was working: I'm getting my wings. So that's the example. Beyond liking, the paradigm is building cues that signal whatever the effect is, okay? It could be a functional benefit, it could be a social benefit, it could be an it's-going-to-change-my-mood type of benefit, or it could be a sensory benefit. You know, it's authentic, or it's creating what I really seek out in terms of sensory qualities. Nothing to do with liking per se.
It has to do with how this product creates value toward my aspirations in life, or the things that concern me and that I want to avoid. So I think this is much more complex than a nine-point hedonic scale score. It's much more complex than an internal or external pref map or a bunch of JAR scale scores that tell me what to do and how to maximize liking. It's about building value into products, designing products so that they create value and people recognize it implicitly as well as experience it emotionally. And I think that's the future. And then, if I can say it, it goes beyond that one more step. I think the future of sensory is understanding how product experiences, as they're designed, impact brand relationships. Because when you have an emotionally impactful experience, it creates memories, implicit memories, and those memories are associative back to the brand. And one of the problems, the food industry is a good example of this, is that there's very low trust today in food systems. People are worried that marketers are just trying to sell them something they don't really need, or claiming something that isn't quite right because they're just trying to get them to buy it. And the brands they're representing are not trusted as much. So, in order to build trust, we need to understand how experiences, saying what you're doing and doing what you're saying from a brand perspective, factor into relationships with the brand. And I think AI is going to help us understand that, because that gets us into the whole realm of social media information: understanding the language consumers are using around experiences and how that impacts brands and relationships. So I think this is the other aspect: we need to go beyond liking, and there are tremendous opportunities for sensory there, and then connect the product and the experiences back to the brand. If we could do that, I think sensory will become even more valuable as a function in organizations than it is today.

Dr. John Ennis: Mm. All right, there's so much in what you just said; there's lots to unpack here. So one thing I want to highlight is this idea of abstraction that I've noticed consistently with AI. It's almost like the level of complexity that people can deal with has clicked up by one notch: people who used to write software can now manage these systems, they can manage a team of agents, so they've gone up one level. People who previously would be maybe casual users of software can now write entry-level software; they can build their own apps, right? Things that used to be static are becoming dynamic. Instead of sending out proposals as PDFs, I'm changing them; they're not going to be PDFs anymore, they're going to be apps. Each proposal will be an app, and it'll be interactive and it'll have, you know, information. And that's just that click, right? And what I hear you talking about, going beyond liking, reminds me that one important evolution in the field was the realization of segmentation and how important it is. In the early days of sensory, people were just optimizing average liking scores, okay? And that led to a lot of products that were just generally non-offensive. The way you get a high average liking score is that you don't offend anybody. And you get, you know, Wonder Bread, or you get light beer or whatever. And that will sell okay, but it doesn't really delight anybody. It just doesn't offend anybody, and it gets a high average score. And I do think it was an important insight that there are different segments, and that by optimizing with a TURF analysis or some of the other tools out there, some of the preference mapping tools, looking at segments, that was progress. But I think what you're saying now, thinking about going up an order of complexity, is something more. Because I think people tend to think, well, okay, if it's not liking, what's the other metric? We should have some new metric, right? But what I'm hearing you say is that actually, no, that's not the right way to look at it. For different segments of society, there are different metrics. One group of people is interested in this benefit; they want to maximize health. This other group of people wants to maximize something else. At the end of the day, we're trying to maximize maybe purchases, but that's downstream from all the things we can control, so that's hard to optimize. So we have to find proxy measures, and those proxy measures can really depend on who it is that you're trying to sell to. Even within a product category, there can be different reasons why people buy it, and you have to disassemble that. But that was too hard until now. AI, by its nature, through synthesis, or deconstruction, the opposite action, allows us to look at complexity that previously was too hard to deal with. Now we can deal with it, and we can start to untangle things that previously we couldn't untangle. So this idea that you might have different metrics that you optimize for, even within the same product category, because you're trying to reach different groups who are themselves trying to optimize different metrics,
that seems to me to be a jump, just like going from average liking to segments. Now we go one level beyond: we have different metrics. And that's fascinating.
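A hypothetical sketch of the segmentation point above, with fabricated ratings: two consumer groups with opposite preferences, where the blandest prototype wins on average liking even though it is nobody's favorite. The data, the two-segment assumption, and the use of k-means are illustrative choices, not a description of anyone's actual method.

```python
# Hedged illustration only: ratings, group structure, and cluster count are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 200 consumers x 5 prototypes on a 9-point hedonic scale.
# Two latent groups with opposite preferences; prototype 2 is the bland compromise.
group = rng.integers(0, 2, size=200)
profile_a = np.array([8, 7, 6, 4, 3], dtype=float)
profile_b = np.array([3, 4, 6, 7, 8], dtype=float)
base = np.where(group[:, None] == 0, profile_a, profile_b)
ratings = np.clip(base + rng.normal(0, 1.0, size=(200, 5)), 1, 9)

# Optimizing average liking picks the inoffensive middle prototype.
print("Winner by average liking: prototype", ratings.mean(axis=0).argmax())

# Segment first, then find each segment's winner (or swap in a segment-specific metric).
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratings)
for s in range(2):
    seg = ratings[segments == s]
    print(f"Segment {s}: n={len(seg)}, winner = prototype {seg.mean(axis=0).argmax()}")
```

Swapping the single average-liking objective for a per-segment winner, or a per-segment metric other than liking, is the extra level of complexity the conversation is pointing at.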

Dave Lundahl: Yep, it's so true. I mean, just the impact of context on human perception and behaviors, behavior tendencies even, is huge. And to think that in the past we were looking at liking in a controlled place where lights and everything are masked and so on, so we can understand the true sensory qualities of the experience, which we roll up into a liking metric. Sometimes that's relevant and serves a purpose, but often it is not the reality of how people experience things, and the predictions from that type of research are inaccurate. Packaging research is a good example. We did some work a couple of years ago looking at package design for a natural products company. It was a plant-based meat product, positioned either next to meat in an open shelf setting, or in the cooler next to where all the natural products are. And we found that the same three different package designs, versus the current one, had very different perceptions in those two areas, because the context was different and people were also shopping differently, looking for different things, with different expectations and so on. So if you want to have a product that will serve both purposes, you need to figure out how to design it. In one case, something that had more packaging, more plastic, was very offensive; in the other case, next to the meat products, it was okay. And just those little insights, that's an example of understanding things in a little more depth. Now, as for how AI could have worked in that situation, we didn't apply AI on that research; we applied some other modeling to understand it. But the point is that the complexity is there, to your point, and we now have the opportunity to train AI so it learns and can be applied in creating these other agents to help us in so many different ways. So I think it's exciting times.

Dr. John Ennis: Yeah, that's right. And we are going to wrap this up in a second. But context: it's very interesting to hear you mention context, because of course, if you think about the nature of how transformers, these large language models, make their predictions, the context is exactly what they're making their predictions based on. And so if you think of the language model, the transformer, as a simulator, the context is kind of the basis for that simulation. So maybe that's part of why these tools are potentially so useful for us: they naturally want to know the context. The better your prompt is, the better your outcome is going to be, because that's really what you're talking about with prompting. In the store, when you put your product here or you put your product there, those are different prompts to the consumer.

Dave Lundahl: Right.
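A hypothetical illustration of the "context as prompt" analogy: the same product description framed with two different shelf contexts, echoing the placement contrast in Dave's packaging example. The prompt wording and the commented-out ask_llm() call are assumptions for illustration, not any specific vendor's API.

```python
# Hedged illustration only: product text, contexts, and ask_llm() are invented.
PRODUCT = "A plant-based burger patty in a recyclable paper tray with minimal plastic."

CONTEXTS = {
    "meat_aisle": "The shopper is at the open refrigerated meat shelf, comparing against beef.",
    "natural_cooler": "The shopper is in the natural-foods cooler, scanning for low-waste, plant-based options.",
}

def build_prompt(context: str) -> str:
    # Same product, different framing; only the context block changes.
    return (
        "You are role-playing a shopper in a grocery store.\n"
        f"Context: {context}\n"
        f"Product on the shelf: {PRODUCT}\n"
        "In two sentences, describe your first impression and whether you would pick it up."
    )

for name, context in CONTEXTS.items():
    print(f"--- {name} ---")
    print(build_prompt(context))
    # response = ask_llm(build_prompt(context))  # hypothetical call to whatever LLM client you use
```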

Dr. John Ennis: So there's a nice match-up there. Anyway, Dave, it's a pleasure talking to you. It's a shame the show has to end; we normally try for half an hour, so we've gone eight minutes over. But one thing we always like to ask, especially of people like you who have such deep experience, is advice for young people. Right now, I think we're going through something like the Industrial Revolution, which happened over decades, except now it's happening in about a six-month period. This past week was an insane week in the AI world, just a totally crazy week, and it's probably going to get more crazy. So we will see what happens. I think no one really knows the answers to these questions, but what are your thoughts on what young people should be doing right now to set themselves up for success as this transition happens?

Dave Lundahl: Yeah, I think the roles in industry are changing, and the needs for certain roles are changing. I just talked with someone who's fairly young in their career the other day, and they're working as a data scientist for a company. They have a PhD in chemical engineering. I was like, whoa, what gives? Why are you working as a data scientist? Because the need is there, and they're highly valued in this organization in order to do these things. They're doing all the work: they're creating these agents, they're doing the prompting as a data scientist, they're pulling in information so you can train up these agents and so on. That's their role, and it is a really, really critical role. Does that mean the role of a sensory professional is going to change? Sensory still has this deep understanding of human perception. I think it's an "and-and." The more sensory professionals can understand the power of AI and how it's going to change what they're doing, the better. How do you collect data, how do you engage with people? AI impacts that. Not only how do you engage with people in great ways, but once you collect data, how do you clean it? And what are the different ways you can analyze it so you can apply it down the road for other insights in the future? That's a really different way of thinking. But the fundamentals of sensory are still there. You still need to make sure you're doing research where people are responding in ways that are not biased, not creating other issues. I think that's fundamental, but then, again, it's an "and-and." Maybe it'll drive different people into the sensory profession, I don't know. Or maybe it's people with a PhD in chemical engineering who became data scientists and now learn sensory. I don't know. But I think about that knowledge, and how does it change our universities and how we educate?

Dr. John Ennis: Education, yeah.

Dave Lundahl: Education is completely changing, you know. So I would say, for young people in the industry: embrace it and learn as much as you can. It will make your career just take off.

Dr. John Ennis: No, I totally agree with that. Use it for everything you can; that would be my advice too. Just use it all the time, you know. But it is such a challenge. Yeah, okay, well, I'll let you have that ending; you're the guest and it was a nice one. The takeoff. Maybe that's a good title for the show.

Dave Lundahl: All right. John, I really enjoyed the exchange. It's engaging, and I've learned a lot; you're really much better at being succinct in summarizing some of my concepts that took me longer to say.

Dr. John Ennis: Well, it's always easier to edit than it is to come up with the first draft. So, all right. Dave, how can people get in touch with you? What's the best way for them to connect with you, maybe if they want to work with InsightsNow?

Dave Lundahl: Yeah, Dave.Lundahl@InsightsNow.com. You can reach out to me that way. You can reach out to me on LinkedIn; I watch that fairly often. And if you know someone else at InsightsNow, contact them, and they'd be glad to pull me into a conversation. I love to hear what people are thinking, what their concerns are, and what they're trying to do. So, open door. I'm also very excited that we're at some conferences: IFT this year, I'll be there; Quirks in New York, I'll be there; and I'll be at Pangborn coming up.

Dr. John Ennis: And I'll be there too.

Dave Lundahl: So I'm looking forward to all those conferences coming up this summer.

Dr. John Ennis: Yeah, sounds great. All right, Dave, well, thanks a lot.


Dr. John Ennis: Okay, that's it. I hope you enjoyed this conversation. If you did, please help us grow our audience by telling a friend about AigoraCast and leaving us a positive review on iTunes. And if you'd like to learn more about Aigora, please visit us at www.aigora.com. Thanks.




That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!


Join our email list!


 
 
 


