Aigora means: "Now is the time for market researchers to prepare for the rise of artificial intelligence."

Dal Perio - What is your Objective?

  • Writer: John Ennis
  • Apr 9
  • 31 min read

Updated: Apr 14


Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!


AigoraCast is available on Apple Podcasts, Spotify, Podcast Republic, Pandora, and Amazon Music. Remember to subscribe, and please leave a positive review if you like what you hear!



Dal Perio is a Senior Manager of Sensory & Product Insights at Starbucks, with 30 years of experience in Sensory Science, Consumer Research, and Marketing Research across seven Fortune 500 companies including Johnson & Johnson, Diageo, and Unilever.


His expertise spans Product Innovation, Consumer Research, Quality Assurance, and Product Testing. At Starbucks, he focuses on Sensory & Product Insights for various channels, ensuring optimal research solutions. He's actively involved in numerous professional sensory organizations and was mentored by Rose Marie Pangborn.


To be put in touch with Dal, please contact Aigora.




Transcript (Semi-automated, forgive typos!)



John: Dal, welcome back.


Dal: Thank you. And you know what, thanks for having me as possibly your first repeat guest. I'm actually honored.


John:  I think so. I'd have to go to the archives, but I think you are the first repeat. Yeah.

Dal: Either way, being a repeat guest, I think, is an honor.

John: Yeah, it's great. All right, well, let's start with that Rose Marie Pangborn thread, because after you were on the show the first time, I received several nice messages from people who were grateful to hear something about her. They felt like they had a better connection to the past and to the history and culture of sensory. So maybe you could start by talking a little bit about Rose Marie: how she influenced you, what she was like as a person, that kind of thing.


Dal: Yeah, I don't think many people know this except maybe a couple of people, probably Joel Sidel and Herb Stone to be specific. At the end of my junior year, I had spent a year working in Rose Marie Pangborn's lab, helping her conduct research with students as panelists. Joel Sidel visited Rose Marie at UC Davis and asked her if she knew of anyone he could possibly hire, because he had an opening. And she said, "Yes, I know the person." So she called me to come see her in the food science lab, and when I went down there, Joel Sidel was there. Of course, being a student, I was in shorts, sandals, and a T-shirt, not in any way expecting to be interviewed by Joel Sidel, who was wearing a suit and tie. We had this conversation, he and I, and I think it was mostly because of her recommendation that it didn't matter what I was wearing or what I looked like. That's how much influence she had on the sensory community at the time. And so he hired me. It was nice going into my senior year knowing I already had a job set up; it took a lot of stress off. I only had a quarter left to go, and having that burden lifted meant I could be a lot more adventurous and absorb a lot more without the stress of trying to find a job. You know how it is when you're a senior in college and you have that stress of having to find a job, because you know you're going to be out of the university. A lot of that stress was taken away.
It helped me focus more on what I was doing, what I was studying, and everything she had to teach me. I think that's a really good indication of how immersed she was in the sensory community, and of how deeply she cared about the people who were her students. She wanted to help them, she wanted them to learn, and she wanted to teach them as much as she could. That shows how compassionate she was and how much she cared about the sensory community. She was doing it for the love of sensory, not for any other real reason; she genuinely liked what she did. And sometimes people forget, because she has a symposium named after her, that she was just a real person.

John: Yeah, no, it's really cool. And I've always found this in sensory, that people are very warm and welcoming, that it's a friendly field, as fields go. I've been to all sorts of conferences; I've been in Web3 and AI, and that's a totally different world. But I do think sensory has a really good heart, and it's interesting to trace that back to Rose Marie. Maybe it's her heart that's at the heart of sensory. All right, so you've been in sensory your entire career, from Tragon onward, and you've been on this journey. So I think you're a great person to ask about the trends we see now, because now we really have the rise of AI. AI has been here in various forms; at Aigora we've been doing dashboards, machine learning models, knowledge management, that kind of thing. But now finally you have LLMs, and they open up all sorts of new possibilities. So how do you see things changing, both when you look around the field and in your own experience? How are these new models and new ways of working affecting sensory?


Dal: Yeah, what I'm seeing a lot more of, and I didn't really see this at the start of my career: a lot of companies kept the different sensory methodologies in silos. In larger companies, companies that could afford enough of a staff in sensory, the people within the sensory department were siloed. There was a team that worked on difference testing, a team that worked on descriptive testing, a team that worked on consumer testing or CLTs. A lot of that research was siloed, and there wasn't always communication among those silos. Those who did consumer testing just knew consumer testing. They didn't know how impactful the other methodologies could be in either reinforcing their findings or pointing them in a different direction, whatever their objective was for the research they were doing. And I think it made things a little more difficult for product developers too, because they'd have to talk to one group to learn whether there was a difference between two products, and then go talk to the people who do consumer testing: "Hey, we found out they're different, so let's talk about what the consumer test will look like."
What I'm finding now is more of a combining of methodologies. I'm using that archaic way of testing from 25 or 30 years ago to fast-forward to what I see happening now. People started to realize how powerful it is to combine all of this within a project. Not that you would run all the methodologies at once, but each one can inform what your next step should be. So there's more of a timeline: let's see if these samples are different; if they are different, why are they different? Maybe that's where the descriptive analysis piece comes in. And again, I'm just throwing these out; it doesn't necessarily mean that's the actual trajectory of product development, but ultimately you're getting to the voice of the consumer. What I'm seeing now is that within sensory research there's the quantitative piece, the qualitative piece, the descriptive piece, the emotions piece, and we use online communities too. We can use all of these different methodologies to talk through what we're learning. It isn't just a CLT any longer that you hang your hat on for a company to make a decision. All of these different parts of research can help inform what your product should be like, or how your product will do once you launch it. Combining all of these methodologies takes a lot of the risk out, and like I said, I think it makes for a much more powerful recommendation that you, as a sensory person, can make.


John: Yeah, something AI is great at is information synthesis, for sure. You have all these different information sources and patterns running through them. And LLMs, large language models, for the listeners (I try to avoid lingo, though I think most people know the term at this point), are great at, first off, translation: if you're trying to get from one language to another, whether it's a computer programming language or a spoken language, they're great at that. And something that's very promising is using LLMs to bridge quantitative and qualitative data, where you want to predict ratings, maybe even with incomplete data. You've got historical data where some product tests were conducted one way and others a different way, and you want to harmonize them. If you have some overlapping data, you can definitely start to bridge that gap. But even just matching up data of one type with another, LLMs are great. They're trained with all these intrinsic patterns, and they give us the opportunity to aggregate information. Now, I know you're really interested in emotions research. In the spirit of getting information from consumers, how do you see that playing into the rise of LLMs?
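One minimal way to picture "bridging" free text and ratings, purely as a toy sketch (the valence lexicon below is invented for illustration; a real system would use LLM embeddings or model calls), is to score open-ended comments into a rough numeric proxy that can sit alongside 9-point hedonic data:

```python
# Toy bridge between qualitative comments and a quantitative liking scale.
# The valence lexicon is invented for illustration; production systems
# would use LLM embeddings or model calls instead of word lookups.

VALENCE = {
    "love": 2, "delicious": 2, "great": 2, "good": 1, "nice": 1,
    "fine": 0, "okay": 0, "bland": -1, "weak": -1, "bitter": -1,
    "awful": -2, "hate": -2,
}

def text_to_liking(comment: str, lo: int = 1, hi: int = 9) -> float:
    """Map a free-text comment onto a rough 1-9 hedonic proxy."""
    words = comment.lower().replace(",", " ").replace(".", " ").split()
    hits = [VALENCE[w] for w in words if w in VALENCE]
    if not hits:
        return (lo + hi) / 2  # no signal: assume scale midpoint
    mean = sum(hits) / len(hits)            # in [-2, 2]
    return lo + (mean + 2) * (hi - lo) / 4  # rescale to [lo, hi]

print(text_to_liking("I love it, really delicious"))  # 9.0
print(text_to_liking("kind of bland and bitter"))     # 3.0
```

The point is only the shape of the idea: once qualitative answers can be projected onto the same scale as historical quantitative data, the two sources can be compared and harmonized.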


Dal: Yeah, I think that emotions, the way they're currently collected, are almost open-ended. Let's say you have 150 consumers and you have them talk through "How did this product you just tasted make you feel?" You're going to get 150 different types of responses, phrased in many different ways, and you have to piece together the semantics, the phrases that keep coming up about the way it made them feel. First of all, emotions have been really helpful for me in making decisions and recommendations. But pulling together the common language for an emotion, the semantics for a specific emotion, can be time-consuming and sometimes subjective. One person says it made them feel one way, someone else phrases it differently but is saying the same thing, and if you have a person literally reading through these open-ended answers, it isn't easy to condense them and say, "Yes, this emotion is connected with this product." I think it can be a lot more efficient with chatbots and machine learning. And continuing to use the emotions piece in a lot of the CLTs you do will be really important, because that machine learning piece makes it easier and easier to figure out what those emotions are. And sometimes, and this is a bit of a tangent off your question, emotions can influence how you advertise or position a product, meaning what time of day.
Say you're getting an emotion that's more connected to the morning, but you'd been thinking this product could be an afternoon snack. Maybe you rethink how you're going to launch that product, because people are saying, "It made me feel this way," and that's the way you feel in the morning; so you can switch over. Again, just a tangent, but I think that filtering, or connecting, or merging of open-ended answers matters. And I only like to use open-ended questions for emotions; I don't like to use them for quantitative work, for exactly the reason I was describing: it's just less of a return on investment. But for emotions you do need them, because people are talking about how they feel, not about how much they like a product. It's a different way of using open-ended questions. And I do like the idea of people talking through how they feel. Sometimes they have a lot of feelings about a product.
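The "merging" Dal describes, recognizing that differently worded answers name the same emotion, can be sketched with a small synonym lexicon. The lexicon here is invented for illustration; in practice an LLM or embedding-similarity model would do the matching far more robustly:

```python
from collections import Counter

# Hypothetical synonym lexicon mapping surface phrases to a canonical
# emotion. In practice an LLM or embedding model replaces this lookup.
CANONICAL = {
    "cozy": "comforted", "comforted": "comforted", "warm inside": "comforted",
    "energized": "energized", "pumped": "energized", "ready to go": "energized",
    "calm": "relaxed", "relaxed": "relaxed", "at ease": "relaxed",
}

def tally_emotions(responses: list[str]) -> Counter:
    """Condense free-text emotion answers into canonical emotion counts."""
    counts = Counter()
    for resp in responses:
        text = resp.lower()
        for phrase, emotion in CANONICAL.items():
            if phrase in text:
                counts[emotion] += 1
                break  # count each response at most once
    return counts

answers = ["It made me feel cozy", "Warm inside, honestly",
           "Totally pumped", "Very calm and at ease"]
print(tally_emotions(answers))  # comforted: 2, energized: 1, relaxed: 1
```

This is exactly the condensation step that is slow and subjective by hand: 150 differently worded answers collapse into a handful of emotions with counts attached.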

John:  Yeah.


Dal:  Whether it's positive or negative. They got a lot of feelings out there.


John: No, that's right. Actually, you know what's interesting: I think that's the fundamental difference between humans and these artificial intelligence agents. This is a tangent now, but it's something that's really been on my mind; I meant to post about it on LinkedIn but was too busy. Humans have emotions for evolutionary reasons: a drive to survive, a drive to reproduce. The organisms that had a drive to reproduce did, and the ones that didn't, didn't. Over long periods you get these deep feelings that are much older, evolutionarily, than some of the cognitive processes. And AI has gone right to the cognitive process; it doesn't have any of the emotions under the hood. I think emotions are what allow us to make value judgments, and that's what's going to keep humans around as useful, because someone has to tell the machines what to do. Machines don't care at all; they'll very happily do anything, but they don't have any compass. We have a kind of emotional compass, and I think that's the difference. I really don't see humans getting replaced by machines; I see them getting supplemented, because somebody still has to drive the ship. Okay, you can set metrics: we're going to optimize for this, we're going to optimize for that. But someone has to set the metric. At the end of the day there has to be a judgment, and only humans can do that, because I think it's the emotions that drive judgment. Okay, that was my tangent; back to you. So, getting back to the emotions, something I've been thinking about is that LLMs, AI in general, are great at personalization.
And I think we sometimes have the opposite problem: we're getting answers from people that are personalized, and in some sense we want to depersonalize them, because we want to generalize. I've been thinking recently about the need to collect as much text from consumers as possible early in the study: have a conversation with them, with a chatbot, to get a sense of how they talk. Later in the study, when you're trying to make sense of their answers, it's very useful for the model to know something about the person. If you have a whole bunch of people who have had some general conversations about their day and about various topics, what you're really doing is anchoring your model. You say, "Tell me about the last time you had coffee," for example, and they talk about it. There's a little conversation that warms somebody up, but what you're really doing is collecting data on how they talk. Because if those stories are more or less about the same sorts of experiences, different people will talk about them in different ways: some people are more effusive, some are more reserved. Then, when it comes time to do the emotions, I think we need to feed that context to the model. We say, "This person tends to talk this way; here's what they had to say," and then there's some interpretation. Then you've got the next person, and you have the same anchoring again. This going in the opposite direction of personalization is very interesting. I don't know if that's something you've thought much about. What are some of the ways you're thinking about using LLMs in your survey work?
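The anchoring idea above amounts to assembling a prompt per respondent: prepend that person's warm-up conversation so the model can calibrate for how they talk before interpreting their study answer. This is only a sketch of the prompt assembly; the wording and the mild/moderate/strong scale are hypothetical, and the actual model call is omitted:

```python
# Sketch of per-respondent "anchoring": the warm-up chat is included in
# the prompt so the model can calibrate for each person's speaking style.
# Prompt wording and the response scale are hypothetical.

def build_anchored_prompt(baseline: str, answer: str) -> str:
    """Combine a respondent's warm-up chat and study answer into one prompt."""
    return (
        "You are interpreting consumer emotion responses.\n"
        f"How this respondent usually talks (warm-up chat):\n{baseline}\n\n"
        f"Their answer about the product:\n{answer}\n\n"
        "Given their usual style, how strong is the emotion expressed? "
        "Answer with one of: mild, moderate, strong."
    )

baseline = "Oh my gosh, coffee this morning was AMAZING, best ever!!"
answer = "The new blend was pretty good."
print(build_anchored_prompt(baseline, answer))
# For an effusive talker, "pretty good" likely reads as faint praise;
# the same words from a reserved talker might read as genuine enthusiasm.
```

The same template is reused for every respondent, which is what makes the interpretation comparable across people despite their different styles.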


Dal: I think with LLMs you can get input from a larger number of people than you otherwise could. As you were talking, I was thinking of peel-offs and focus groups, and that was one of the things I alluded to with the combining of methodologies to get more powerful information. One form of qualitative where we're actually hearing what consumers are saying comes in the form of peel-offs. They've tasted; we have, say, 20 people in a session; we peel off maybe five of those people and have them talk about the products. Those numbers are not big enough. You get a little bit of a trend as to what they're thinking about the products they just tasted. It won't be quantitative, but you may get responses, or even ask questions, that you just can't get at with a questionnaire, with a ballot; you can get to specifics in a peel-off that you can't on a ballot. But that conversation you're having with five or six people is not enough. You can get so many more responses with LLMs applied across the whole group than from a qualitative group, is basically what I'm saying. Even if you do four groups, that's still only 20 people. But when you have 120 people coming through, you can use the voices of all 120 without having to have them all sit in a room at any given point to talk about those products.
And I haven't gotten to that point yet, but I think the quantity of responses I would get through LLMs makes the recommendation, the summary, the results more meaningful, because you have more people telling you what they feel or how they feel, whether it's emotions or just whether they like it or not. So the biggest thing, to answer your question, is that the number, the magnitude, of people's responses you get would be much greater with LLMs.

John: Yeah, no, that's right. I've long been a fan of voice-activated surveys, smart speaker surveys. We had, I think, one of the first platforms for smart speaker surveys at Aigora, but back in those days Alexa was quite limited and the models weren't very good. So, yes, it was useful; we could collect data in real time. But I'm really excited to get back into that, because I can see a world where people come into the study, and then they go home, and that night, or the next morning when they're drinking coffee, they get a notification and have a voice conversation about their experience. Or maybe they leave and continue the conversation in the car, or they just go into the other room. You can have these voice conversations, and the new multimodal models are able to detect emotion. Every morning I do my journaling with voice, and one of the things I have it analyze is how I'm feeling. It does a very good job at that, actually, just from my vocal tone from day to day. So I think you're right: we're going to be able to go wider with more people, but I also think we're going to go deeper. Yeah, it's fascinating. Okay.


Dal: You know what's funny, and this might be a little bit of a tangent too, but I think of when I worked at Unilever. Because of the nature of their products, I felt like so much information was left on the table, so much information that wasn't collected. One example specifically: back in the 90s, when I was at Unilever, I worked on Axe Body Spray, and we did these home use tests everywhere.

John:  Thank you.


Dal:  What's that?


John:  Frat boys everywhere. Thank you.


Dal:  And their mothers.


John:  The girlfriends. Thank you.


Dal:  But the girlfriends too. But what was funny was that we would do these home use tests and we'd have these, these guys, you know, the prime ages, you know, from, from 16 to 22. And they would come in and they pick up their bottle of AX and have it in the bag. But when they came in, when they walked into the building, they already smelled like AX to high heaven.

John:  Right?

Dal: But the girlfriends too. What was funny was that we would do these home use tests with these guys, the prime ages, 16 to 22. They would come in and pick up their bottle of Axe in a bag, but when they walked into the building, they already smelled like Axe to high heaven. Like they'd probably put on a whole bottle of Axe already. And this is back in the 90s, when they had their frosted tips, and they're wearing their gold chains and their jewelry; these are young guys coming in. We were just asking them how they liked the smell, how they liked the use of the bottle. But there was the emotion part of it that I kept thinking about: how did it make them feel? Why did they feel they needed to use as much as they did, number one? Or why did they save it for special occasions? We never got that kind of really key information, and I would have loved to. If chatbots had been available back then, we could have gotten such rich data, and how it connected to their lives, the frosted tips and the jewelry and all that, because I think that's part of the connection: how it made them feel, or how it would make them feel. So I think that in a lot of ways, the trajectory of getting research that's valuable for making a recommendation will probably go straight up. In the beginning, you'll just be able to get so much information that people won't be able to figure out how they ever got by without it.


John: No, I think that's exactly right. So you collect a lot more information, but you'll also be able to make sense of it, because LLMs are great at distilling and finding the patterns. There are so many things I want to do. Right now we have a lot of expertise in chatbot engineering from my startup, and we're bringing that into sensory. Something very simple we're doing is adding a discovery chatbot: people go to our website and click "Contact us," and that's going to be replaced by a chatbot that will have a conversation with somebody and help. The whole point is that the chatbot has been trained to get the information we need and to try to qualify the potential client. Then we can take all those conversations, put them together, analyze them, and look for trends. What are people asking about? Maybe there are services a lot of people are asking about that we don't provide. You all might incidentally pick that up too: you were asking about one product, but you discover through the analysis of these chatbot conversations that there's actually a need out there for something you weren't even thinking about.


Dal:  Yeah.


John: So, currently we call these things open ends, but they're not really open; they're not conversations. If you really open that up, you might learn all sorts of things about this demographic; it might even suggest a new product to you guys.


Dal:  Exactly.


John: Yeah. So, all right, what else do you see? Something I'm hearing a lot about is digital twins; that's something I'm hearing a lot about coming back. I'm kind of counting on you to help me get up to speed, though I've been doing my research. But what are some of the other trends over the last three years that you think are going to continue over the next two?


Dal: So one of the trends, and it took a lot for me to get this ball rolling, and I know there are other companies that have used it, just these isolated instances, and I know it's been really valuable to them, is the online community. I don't know how familiar you are with them, but the way your average online community is set up is that you have roughly 50 people in the community. You recruit them and vet them for how comfortable they are speaking in front of people, how comfortable they are using social media and talking through products, but you also want them to be more forward than your average consumer about a specific product. So you're not recruiting your average person to be in that online community. Say it's 50 people, or 50 to 100 people in the community; the vetting process is stringent enough that you're getting people who know more than the average consumer about that specific product. It's a closed setting, like a closed Facebook group, so no one can get into it except the people you recruited and vetted. In this community you then throw out topics of discussion. You can do it weekly or every other week; we do it every other week. You throw out a topic, and for two weeks they can type in, as if they're on Facebook, at their leisure, and talk through whatever topic we've thrown out.
One of the things I'm thinking of, and I don't think we do this yet: say we throw out a discussion about a specific beverage, and people talk about it. "I like it when it has oat milk in it, or when it has dairy in it, or I just like to drink it straight," let's say. Using a chatbot there, I think, would be helpful. We do have a person who moderates the discussion. Sometimes at the very beginning of an online community, people will want to throw out a political statement about one of the candidates, and the moderator does keep it on subject, deleting the political statement and reminding them what the discussion is about. And with AI as part of that moderating, you could probably even ask questions, because we don't have an actual physical person moderating 24/7. They're just on during business days, and sometimes people are on the online community late at night, thinking, "This is what I think about this product at the store," or whatever the conversation happens to be. The moderator won't have a chance to read it until the next day. But if you have AI on constantly, it can monitor and moderate these groups around the clock, and I think that could be really valuable. I'm really thinking out loud here, but I think there could be a lot of advantages to having AI monitor these online communities.
Because one of the things about online communities, and I want to go back a little to how I feel about them, is that you really get the direct voice of the consumer. Not your average consumer, like I said earlier (I think I've said it twice now), but the consumer who knows more about your product than the average consumer does. They're going to be more opinionated about it, because there's this emotion, again, coming into play; they're more involved with that product than your average person is. So I think we're onto something, if it hasn't already been done: AI could be a big part of these online communities. The other thing about online communities that I think is really positive is that these 50 people are from all over the United States. Sometimes we'll have conversations with them; we'll randomly pull five people into a conversation, sort of focus group style, though of course it's all virtual. We'll have someone from Anchorage, someone from Miami, someone from LA, someone from New York, someone from Boston, all in a room talking about a product, and you get these different points of view. Whereas with peel-offs, like I was talking about earlier, or even focus groups in general, they're all from one location.


John:  Right.

Dal: So you're getting the point of view of that city, that specific location. But in the case of an online community, you're getting it from different parts of the US, which is, I think, really cool.

John:  That's right. And you know the, you know a fair amount about these people. And that can go into your analysis.


Dal:  Exactly that. You know that too. You're right.


John: Yeah, this is great. Okay, well, that's really good. It's definitely possible to build bots to moderate Facebook groups. And a lot of people tend to confuse chatbots with the large language model itself, but they're not the same thing. A chatbot is really a system where you've got an interface for...


Dal:  Yeah, tell me the difference between the two.


John:  Yeah. A chatbot is a system where you have an interface that collects text, or input, from users. It does some things, which could involve a large language model, and then it returns text or an action or whatever to the users. As far as the front end goes, you simply have a text field: text goes in, text goes out, if it's a basic chatbot. But under the hood, you have all sorts of processing when that text comes in. The first thing you have to do is try to figure out what the user's intention is, because there can be different processes that get kicked off. It may be that there has to be a search to get more information; you may have to search some knowledge base to get information to answer the question. So a very typical flow would be like our helper chatbot for a web3 game. That's one of the services my other company provides. When people show up there, they might just want to know things about the game. The chatbot takes the request, does a little search, pulls together information that could help answer the question, and then there's a second model call. The first model call is: the person said this, what are they trying to do? It gets categorized, and that lets you know which process to kick off. Say it comes back: they're looking for information. Now we do a search, based on what they said, to figure out what information we need. We put that together and make another model call. The second model call says: if this is the question and this is the information, what's the answer to the question? So now we get the answer. Under the hood, you've already had two LLM calls, one to figure out what someone's trying to do, the other to answer their question, with a search in between. Then it gets returned to the user. They might want to buy things.
We might need to look at a marketplace API to see what's for sale, and they might want to sell things, so we have to look again. They want to buy and sell; there are other things that might happen too. And in your world, with the moderator, the text is coming in, and anytime new text comes in there's some sort of, basically, you could call it surveillance. It sounds kind of bad to say it like that, but it is: the bot is surveilling the group, and now the question is, all right, what's happened? One thing that might happen is someone has made a political comment, and then the moderator has to gently remind the group, we don't talk about politics in here. And if it happens a few times, you might kick off a process to mute the user for some amount of time. If somebody does something really bad, they might get a warning and be told, look, you can't do this in here. Or maybe two people are getting in a fight, calling each other names, and they have to go to timeout. And the bot is going to take the history. It's not just going to take the latest thing that was said. It'll have the whole history, it'll have the context, and it'll try to figure out what the situation is, and that'll kick off different processes. Now, it could be that everything that comes in, the bot analyzes, but most of the time the moderator probably shouldn't say anything. People are just talking, and it would be annoying if every time you talked the bot responded. So there's an analysis: what's the threshold for response? And if something's going really off the rails and you need to get a human moderator in there, maybe there's a notification system where the bot is watching and it's like, this is getting out of hand, let's ping, you know, whoever.
And then somebody gets a buzzer in the middle of the night and has to get out of bed and moderate a Facebook group. Or it could be that you have different people in different time zones, and the person in the right time zone gets contacted: hey, have a look at this conversation, something's going on here, we need your help. It could be recommendations. Someone says, hey, does anybody know what this is? The moderator can do a search, wait a little while, and if no one answers, answer the question. So that's what I think a lot of people don't understand about chatbots: they're systems. The models are pieces of the puzzle, but so are database calls. There could even be some analysis in there. You could run a machine learning model: someone says some things, you predict what they think, and based on that you have different answers. So it's really a whole system. And I think that's why, when people say there's no place for consulting anymore, I don't think that's true. I think consulting is going to mix together with software now, and you're going to have these engineered systems that are actually very niche, and those will provide value. Because, okay, information is getting relatively cheap. Coding is getting relatively cheap. But this engineering, this architecting of solutions for specific use cases, that's something I think really does require the human touch. How do you make this bot act like a human moderator?
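The two-model-call flow John describes (classify intent, retrieve, then answer) could be sketched roughly like this. Note that `call_llm` and `search_knowledge_base` are hypothetical stand-ins, stubbed with canned responses so the flow is runnable; a real system would call a model API and a retrieval layer.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned output for this demo."""
    if "Classify the intent" in prompt:
        return "information_request"
    return "The beta launches next month."

def search_knowledge_base(query: str) -> str:
    """Stand-in for retrieval; a real system would search docs or a vector DB."""
    return "FAQ: The beta launches next month."

def handle_message(user_text: str) -> str:
    # First model call: figure out what the user is trying to do.
    intent = call_llm(f"Classify the intent of this message: {user_text}")
    if intent == "information_request":
        # Search step: gather context that could help answer the question.
        context = search_knowledge_base(user_text)
        # Second model call: answer the question given the retrieved context.
        return call_llm(f"Question: {user_text}\nContext: {context}\nAnswer:")
    return "Sorry, I can't help with that yet."

print(handle_message("When does the beta launch?"))
```

In a production chatbot, the intent label would route to different processes (search, a marketplace API call, a human handoff) rather than a single branch.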
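The moderator logic John sketches (stay silent most of the time, gently remind on rule breaches, mute repeat offenders, escalate severe cases to a human) might look like the following. The keyword-based `classify` function and the mute threshold are illustrative placeholders; in practice that judgment would come from a model call that sees the full conversation history.

```python
from collections import defaultdict

MUTE_AFTER = 3  # warnings before a temporary mute (illustrative threshold)

def classify(message: str, history: list) -> str:
    """Stand-in for an LLM call that would weigh the whole conversation."""
    text = message.lower()
    if "fight" in text:
        return "severe"
    if "politics" in text:
        return "rule_breach"
    return "ok"

class ModeratorBot:
    def __init__(self):
        self.warnings = defaultdict(int)
        self.history = []

    def on_message(self, user: str, message: str):
        self.history.append(message)  # keep full context, not just the latest line
        verdict = classify(message, self.history)
        if verdict == "ok":
            return None  # most messages should get no response at all
        if verdict == "severe":
            # Off the rails: notify a human moderator instead of replying.
            return f"ESCALATE: ping a human moderator about {user}"
        self.warnings[user] += 1
        if self.warnings[user] >= MUTE_AFTER:
            return f"{user} muted temporarily"
        return f"Gentle reminder, {user}: we don't talk about politics here"
```

Returning `None` for ordinary chatter is the "threshold for response" idea: the bot surveils everything but only speaks when a process is actually triggered.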


Dal:  I'm writing this down.

John:  Are you writing it down? Okay, well, let's talk later. Honestly, that's a problem we could help you with, so if you're interested, I'd love to work on that problem. I am happy as a clam these days, Dal. The startup world is so hard, but it is nice to be back. You know, I'm back because Danielle had to step down for kind of personal reasons, so they needed somebody to step in at Aigora. And I am so happy, because basically everything I was talking about when I started Aigora is now coming true, and I feel like a kid in a candy store. I cannot wait to come in and open up AI Studio. I mean, it's amazing how good the models have gotten. It just feels so freeing.


Dal:  It's just.

John:  And I think we're going to feel free as researchers: if you can think of it, we can probably do it. You just have this idea, let's moderate the Facebook group. Well, it's doable. It's just amazing. All right, well, I could talk to you all day, but we're at time, so any parting advice? Something that's really on my mind is how people should position themselves. What should they be doing right now to get ready for this, essentially, industrial revolution that we're going through?


Dal:  Yeah, yeah. I think we kind of touched on it a little bit, and it was something that I did note: it always comes down to the objective of your research. I think that is probably the most important thing, and a lot of people always forget it, even when you ask them, hey, what's the objective of your research? No matter what our conversation has been today, it's really not as important as the research objective. Why are you having this conversation in the first place? Why are you considering all of these different methodologies, or considering what the right methodology is, if you don't know what your objective is? And I think a lot of times, when you have all of these different methodologies at your fingertips, the thought process of the research can become so complex that the true objective gets lost. And as a result, you get, and I don't know who first said this, but I know someone said it along my career, probably my academic career: garbage in, garbage out. Right? If you're collecting crappy data because you don't have your objectives straight at the beginning, then what gets analyzed and what comes out at the back end is not going to be helpful to you at all. You're going to find out that you've done all this research for nothing. So I think getting the objective right is something that, I don't know why, has been lost. I mean, it's even like writing a thesis, right? You've got to figure out what your thesis is going to be. It has to be very specific and very narrow. It can't be too big, because you're not going to be able to finish it if it's too broad.
So I think that's part of the message I have: regardless of what the research is going to be, and where we're heading as far as the methodologies or the types of research that we'll have at our fingertips, if you don't have the objective right, you're not going to be able to use all of these really cool ideas and methodologies that are coming down the pike.

John:  I totally agree with that. I think it's actually a danger for people who aren't focused: now that you can do anything, you can end up doing everything and accomplishing nothing. That is what the humans give: the humans give the direction. It's up to us to decide what's the right direction to go. So yeah, that advice to get clear on your objective, on what your goals are. If you don't know, the machine will just do anything you tell it to.


Dal:  If you don't know what your goals are, yeah, the machine won't know either. I feel like there should be a course in writing an objective. Maybe one does exist, but I think that's something that's lacking: a lot of people in general, not just product developers, but people in business, just need to know how to write an objective for any research.


John:  Yeah, that's right. And I think one good use case for LLMs is to say to the chatbot: hey, I'm working on this project, I have these ideas, I'm not entirely sure what I'm trying to accomplish. Ask me questions until it's clear what I'm trying to do.
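A prompt along the lines John describes could be packaged as a small template; the exact wording here is illustrative, not a canonical prompt.

```python
def objective_clarifier_prompt(project_notes: str) -> str:
    """Build a prompt asking an LLM to interview you until your goal is clear."""
    return (
        "I'm working on this project and I'm not entirely sure what I'm "
        "trying to accomplish. Here are my notes:\n"
        f"{project_notes}\n\n"
        "Ask me one question at a time until my research objective is "
        "specific and narrow enough to act on, then restate it in one sentence."
    )

# Paste the returned string into any chatbot window to start the interview.
print(objective_clarifier_prompt("Online community of 50 coffee superfans."))
```

Asking for one question at a time keeps the model interviewing you rather than guessing your objective in a single pass.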


Dal:  Exactly. Okay.


John:  And that's a type of prompt engineering. Prompt engineering is a really important topic right now: making sure that you have very clear prompts, so that your machine is going to do what you want it to do. And I always tell my whole team, I told them this morning, when you're working with an LLM, you should always have two windows open: one where you have your main work, and the other where you have a helper, another chatbot, open. It's there to help you get your prompts clear. Only put the prompts that are clear into the main work. So anyway, Dal, it's always a pleasure to talk to you. I know we're at time here.

Dal:  Yes.


John:  Can people reach out to you, find you on LinkedIn? What's the best way if they want to get in touch?


Dal:  I am not on LinkedIn.


John:  That's right. I remember this last time. Yeah, yeah, yeah. You're like the only person in the world.

Dal:  I haven't been on LinkedIn for like 15 years, and I don't see any reason to go back on it. So, same thing as before: if they need to get a hold of me, or would like to get a hold of me, they can do it through you, and then you can pass them along to me.


John:  Okay, yeah, that sounds good. I'm glad you're not. I've been getting killed on LinkedIn lately. I criticized European vacations, and apparently you're not supposed to do that. I got crucified.


Dal:  Makes me want to join LinkedIn just to hear what you're saying about it. Oh, my God.


John:  Yeah. Anyway, all right, Dal, a pleasure. Thank you very much.



That's it. I hope you enjoyed this conversation. If you did, please help us grow our audience by telling your friend about AigoraCast and leaving us a positive review on iTunes. Thanks.


That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!


Join our email list!

