Sara Jaeger - Bridging the Gap
Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!
AigoraCast is available on Apple Podcasts, Stitcher, Google Podcasts, Spotify, PodCast Republic, Pandora, and Amazon Music. Remember to subscribe, and please leave a positive review if you like what you hear!
Sara trained in Denmark and the UK and currently leads a team of sensory and consumer researchers at the New Zealand Institute for Plant & Food Research. She has an extensive international research network and is particularly interested in consumer behaviour and research methodology. Sara has worked with several global food and beverage companies and is a regular speaker at conferences. Sara co-chaired the 9th Pangborn Sensory Science Symposium and currently serves as editor for the journal Food Quality and Preference.
Transcript (Semi-automated, forgive typos!)
John: So, Sara, thanks a lot for being on the show today.
Sara: Thanks, John. It's a pleasure. I look forward to it.
John: Yeah, definitely. So one thing that unites us, I think, is our interest in methodology. So I think a good place to start, for listeners who maybe aren't familiar with your research, is to talk big picture. What are your thoughts on the key questions when it comes to methodology within sensory and consumer science? What are the things that you're interested in, and what do you think are the questions people should be looking into?
Sara: Well, personally, over many years through my work in New Zealand, where we do a lot of product testing, I've been very interested in how we do that well with consumers. What are the methodological developments there? How do we test with consumers, and what types of information can we obtain from them? I think one of the biggest things that has happened over the last decade or a bit more, and that I hope to have played a small role in, is the blurring of the lines between trained panel work and product characterization with consumers. It's now fairly well accepted that consumers can perform, for example, sensory characterization tasks to a reasonable degree, and we can use something like CATA methodology, which is one of the things that I've been interested in. So I think that's actually been a big change in thinking for the field, because we know now that we can use consumers for many things. And I think that opening is also linked to this whole idea of looking beyond liking, because it used to be that liking was what we asked consumers about, and nothing else. Now we think much more broadly, and I think that's really opened up the field. It's no longer only product specific and focused on sensory characteristics; there's a much broader focus on product experience. And I think that's a really positive shift for the field as a whole.
John: Yeah, I definitely agree. I mean, to some extent that's been the extension of sensory science into the field of sensory and consumer science, and you played a supporting role in that. And I think that's what's interesting to me. The whole theme of the show is how new technologies are impacting sensory and consumer science. So in that vein, what do you see in terms of understanding the consumer experience better? What are some of the opportunities that new technologies might be affording us in our research?
Sara: I guess it's hard to know, but I think one of the things that's been interesting to watch over the last couple of years, or a little bit more, is VR and the technology around it. I've personally been quite interested in the influence of situational context and how it plays a huge role in the decisions consumers make about what to eat and drink. Cereal is the classic example. Most people like it, yet they don't eat it for dinner. Why not? Because it's not considered situationally appropriate as an evening meal, despite the fact that it's very well liked. So there's this interest in understanding how situational context influences product experience and purchase decisions, and the related consideration that we need to move out of the sensory laboratory and increase ecological validity. That could be through observational data, or through home use tests. And I think something like VR has been put into that broad box of things that could be cool to do there. I think it really still has to show its promise. It's hyped, and there's probably a lot of potential, but we've not yet really seen what it can do. And I think that's a shame, because there's probably some very nice methodological work that can be done within the realm of VR to look at how we can systematically change eating environments and eating conditions, and look at how individuals respond. The classical issue in situational research, and Herb Meiselman is one of the people who has done very well in pointing this out, is that you can't really take the same person and put them in a local diner and then in a high-end fine dining restaurant, because the people who eat in the fine dining restaurant generally don't eat in the local diner, and vice versa. And so when you take a person out of their usual context, it changes the whole eating experience.
But by systematically varying conditions through VR simulations, you might be able to begin, one step at a time, to figure out what some of the factors influencing this are, by doing it in a systematic and controlled environment. I would love to be able to work in that space, and I think there's development that can happen there.
John: Yeah, so you touched on something that I see as one of these dichotomies that are becoming less dichotomous, and that's the tension between ecological validity and scientific control, or experimental design, right? And I really like what you just said about the possibility that through virtual reality you may be able to manipulate things in a scientific and statistically sound way, but keep some of the ecological validity. That reminds me of another topic that I think we're both interested in, which is this idea of bridging quantitative and qualitative research. There have always been these two schools of research within sensory and consumer science, and I think you have some really interesting ideas about how they may converge. I'd like to hear your thoughts on qualitative versus quantitative research. What are some of the differences? What are some of the similarities? How do you see that unfolding in the next few years?
Sara: Well, again, I think it's going to be another one of these areas where the lines are blurring. At university we're probably still largely taught that it's a dichotomous thing: you're either doing qualitative research or you're doing quantitative research. And granted, traditional quant research, a survey say, is not going to become qualitative that way. But the opportunities we now have with text data, with interviews, to obtain large amounts of data and then put some quantification around it, I think are going to move things in a certain direction, which is very interesting. Text as data is exploding, not just in our field but absolutely everywhere, and that's certainly going to have an impact. What will be very interesting to observe is what's going to happen with what you would traditionally consider pure qual research. For example, ethnographies, or really in-depth interviews. There's one paper that always sticks in my mind, from the Journal of Consumer Research, which is probably regarded as one of the hardest consumer research journals to get into, and apparently there's a paper published there based on just a single research respondent, with data collected over time. That notion of not needing many people, but working with them in a huge amount of depth, is that going to continue to be regarded as a valid way of obtaining information when we now have access to many people's data in a lot of detail and can extract from it? So I think there will be some very interesting shifts, or debates, around the value of some of these things. And again, it comes very much back to the training that we're brought up with, because it really is very dichotomous.
And I've certainly experienced myself that some qual researchers scoff at quant researchers, and the other way around, and it can actually be quite hard to find middle ground. Perhaps this use of text as data is an opportunity to gain some middle ground, because I do think we have things to learn from each other. There's never a panacea. There's never a golden method. Everything has pros and cons; everything has something to offer. And that's a nuanced perspective that, as a person interested in research methodology, you have to take. You can't work in this area saying there's only one method, that's the right method, and that's what I'm going to push. You choose methods by weighing the pros and cons against your research questions, the context, and the situation around the topic and the work you're doing, because it varies from time to time.
John: Right. Yeah, that's right. I think that's a theme that runs through both of our lines of research: the idea that you should be able to pick the right tool for the right situation, right? Yeah.
Sara: Yeah. And I think people need to be educated in that. You often see at conferences and in papers that people talk about expanding the toolbox we have available to us. And I think that's great, but it's only great if an expanded toolbox allows people to make better decisions about what the right research methodology is for a certain question.
John: Right, yes, you have to understand your tools, right? You know, you might get some fancy tool, but if you don't know anything about, say, its measurement properties, that's a problem. I mean, a lot of these things are basically instruments for collecting data, right? And so you need to understand their different properties.
Sara: And that's something I think many people don't have: a basic understanding of measurement properties in data and some of those types of things, which is a shame. You know, I've worked with Herb Meiselman and Armand Cardello, earlier editors of Food Quality and Preference, and one of the conversations we've had there is that our field as such would benefit from a larger number of people trained in psychology and psychophysics, because they're trained in some of those things but also have the human perspective and that measurement angle. I think those people are trained to think quite differently from a person who comes through a food science and technology type background.
John: And that actually reminds me of something we were talking about before the call, the difference between data and reality, right? The data is a representation of reality, but the numbers you get, or the text data, are not the same thing as the thing you're trying to measure, right? And you can measure the same thing in multiple ways and end up with different measurements.
John: So, yeah. So you're absolutely right.
Sara: From a research methodology perspective, you know, triangulation is something that we're interested in, right? There's this idea that you're looking for a true signal, and if you can get it from multiple data sources and multiple research methods, and they all point the same way, that's triangulation, and then you have much greater confidence in your data. I think those are some of the things I would like to see people think more about and give more consideration to when we talk about how the field could improve, how we could heighten the quality of our research and our thinking. Another thing we're seeing, especially because our area is opening up, is this whole area of hypothesis development grounded in theory. That's coming in from other areas: thinking critically about what we're trying to do and then choosing the methodology that's right for the question, rather than starting with the methodology and saying, okay, what could I use this for?
John: Well, that's right. Yeah. Whenever you have new technologies, you have this solutionism issue, where people have something cool and they want an excuse to use it, right? So you definitely have to make sure it's science first: there's a question we're trying to answer, and then, when you have a new technology, like the smart speaker surveys we've been talking about, for example, we need to understand the properties of this new way of collecting data. Anyway, that's its own topic for another day, but we're definitely working on that. So I'd like to hear a little of your thoughts on the cultural differences between qualitative and quantitative research, because that was a really interesting point in our preliminary discussion that I hadn't really thought about: okay, you've got these different methods, but you also have different ways of looking at the world. Could you give us some brief thoughts on how quantitative researchers approach the work of qualitative researchers, or maybe what they can learn from each other?
Sara: Yeah. I think one of the things I've personally learned from working with qualitative researchers is, I guess, absorbing the skill it takes to perform qualitative research well. If you think about a focus group or an interview, it's pretty easy to write a set of questions down and then go and ask somebody those questions. But doing it well is about listening well, not interrupting all the time, and being able, with clarity of mind and purpose about what it is you're trying to study and what your research questions are, to probe in the right way. That is something very skilled and experienced qualitative researchers do very well, and I've always really admired it, because when you stand in front of a person who is not skilled and not good at it, you can see how they can miss huge, important things that might just be said as a throwaway comment but are actually important, or how, by the way they're engaging, they are leading and biasing the conversation. And if you are mindful of those things, you take that with you as a quantitative researcher and ask: am I also mindful in the way I write, for example, my survey questions? Again, it's something on paper; it's easy to write a questionnaire, and it will always give you data that you can analyze. But have you actually written the questions well? Have you written them in a way that allows people to answer them well and gets at what you're really trying to get at? It's actually very difficult to write a good survey and a good questionnaire.
John: I definitely agree with that, definitely. I've actually thought about the extent to which there is real subject matter expertise in, say, a focus group leader or qualitative researcher. There's a tremendous amount of subject matter expertise there. But I think we have to be careful. I see this with my clients: they have online communities where, I don't know, thousands of people are in a community, people can upload pictures and write little notes, and so they have a lot of data, and they kind of have the idea that with good enough analytics they're going to be able to get the same kind of information that a skilled qualitative researcher might obtain. But actually, listening to you, I'm really questioning that, because you're right that there is a skill in terms of probing and following up, and it may not be enough to just have all this observational data.
Sara: For those of us working with food and in sensory, a useful analogy, I think, is the trained panel moderator. Nobody in their right mind would say that just anybody can go in and moderate, train, and maintain a trained panel well. Anybody who's done it knows it's very, very difficult to do. It's the same kind of thing: sure, as a moderator of a trained sensory panel you've got to have a lot of sensory acuity, but actually getting your panel to work together well, getting that information out of them, listening to them (what is it they're experiencing? how are they describing the differences between the foods?) and pulling all of that together is a similar type of skill set, and that is very hard to do well too. And possibly in part, you know, we started off by talking about this blurring of the lines and CATA questions, and the cost of maintaining these trained panels is huge. If you really want to do it well and have well-trained panels, it takes a lot of time. You can't train a sensory judge over four weeks and then say that they're really experienced, because actually they're not.
John: Yeah, this is really giving me a lot to think about, Sara. There's always this urge to just try to replace things with new technology, but actually, in the long run, the things that are most valuable are the tools that help people do their jobs better. I actually have some calls coming up next week with some qualitative researchers who are interested in how they can use, say, speech-to-text transcription apps and those kinds of text tools. And I think there's a huge opportunity there to support these researchers with these tools, make their jobs easier, and maybe help them reach a wider audience, especially now that people are on Zoom and whatnot.
Sara: And, you know, thinking about research methodology and that shared interest, when you become mindful of these things, I think it would be hugely interesting to see what kind of interviews technology, or AI or whatever, would be able to do compared to a really skilled qualitative researcher, and actually put it to the test in some sense. Both with a view to looking at the possible bias of the interviewer, but also, I think, to highlight the skills that are really required. Because in qualitative research, and especially if you think of interpretive research, there's this notion that you as the researcher are part of the research process and can't really be separated from it. The way you interpret your data is also through your own lens, your own personal experience, knowledge, and expertise. For some natural scientists that's quite a hard way to think about things, when we're used to things having to be objective and unbiased. But there's a whole other research paradigm around some of this, and I guess the same goes for ethnographies and for sociology and anthropology, which think very differently about understanding and meaning making. That's one of the terms we often hear qualitative researchers from those fields use. It's about how we as people make meaning: the meaning we extract from food, from a dining experience, from buying products in a farmer's market, or how we respond to the use of technology in food production, whatever it might be. What we're after is what that means for us and what our relationship with those types of things is.
And that can require a lot of experience, and I would be really interested to see how far along technology is in some of the nuances required to do this well. I'm not necessarily expecting that it could replace a skilled interviewer here and now, but there are opportunities for synergy, to do things better together using technology. Perhaps it can pick up on things. I've heard you mention that perhaps, through speech, some of these technologies can pick up on an undercurrent of emotion. Could that help while you're so focused on asking the questions yourself? Somebody gives you information, and from the side the technology perhaps suggests a probe, or flags that the person wasn't quite happy with the way that part of the conversation ended. So can some of these things be synergized? I think there's an opportunity, and I don't know where it's going to go, but an opportunity to improve the quality of research data through interaction between humans and technology. I think that would be hugely exciting.
John: This is really fascinating. Yeah, because now you're making me think about the possibility that you could have a human interviewer talking to a respondent, the respondent has some wearable devices on, and the interviewer has access to information like, oh, the person's becoming anxious, their heart rate has come up, right? And especially as you have more of, for example, the glasses that can give you information in real time, the interviewer could be getting extra information that would enrich their ability to conduct the interview. Yeah, this is really exciting, Sara. This is why I do AigoraCast, to get these kinds of ideas.
Sara: But again, you might say, well, okay, doesn't that hinder the ecological validity? Not so much, if it allows you a deeper conversation and lets you go deeper with people, the way a skilled qualitative researcher does by building trust and that shared bond. Because often, if you think about an interview, like what you and I are doing now, I like to think of it as a conversation on an agreed topic. It's not me asking you questions or you asking me questions. I think the real value and experience in a lot of these deep qualitative methods is exactly that: it's a back and forth, a shared thing we develop together.
John: Yeah, definitely. Actually, it's interesting that we're on Zoom right now, because we can see each other, and the fact that we can see each other is actually making the conversation go better. So if we could enrich it with more information, it could be even better, right?
John: So that's fascinating. Yeah. Okay, great. Well, we're actually almost out of time, but I do have a couple of other questions I want to get to, because you had some really interesting ideas about something I'd never thought about. You and I both know that when you do research, especially online research, there's always some percentage of people who are going to essentially flatline. I think it would be good to hear some of your ideas about how you might use technology to help identify someone who maybe isn't engaged in the study. I honestly hadn't thought about this, so I just want to make sure we get to it: the use of technology at a meta level to ensure that the data collection is happening in some sort of reliable or valid way.
Sara: Well, I think it is what you've just said. We know this is a problem, in online surveys in particular; some people are just clicking through. And our ability to screen these people out is very crude. We use measures such as: if they've completed the questionnaire too quickly, they're not engaged. And it's really just a pie-in-the-sky type thing; we just make up ad hoc rules, or we say, if they've given the same flatline response for more than nine or ten responses in a row, we're going to cull the data. Sometimes that's a reasonable tool, but I could easily write ten attitudinal questions or statements for which it would be absolutely acceptable to give the same answer, for example "neither agree nor disagree", right? So they are just ad hoc rules. And if we really want to rule those people out, because they essentially contribute nothing but noise if they're really not engaged in the task, how do we identify them? And what is it that technology can do? I don't know. I mean, I know that some of what's happening is looking, for example, at response speed. How are people typing? What are the keystrokes like? Do they reflect just hammering away without looking at the keyboard, or is there something in the way the keystrokes happen that reflects people writing in a considered way, thinking about it? I think there are lots of opportunities there. I don't know exactly what they would be, but I can see that they're there. And I know a little about some of the work people are doing around decision making, looking at speed to respond, what people respond to first and what second, latencies, all of those types of things. That type of thinking could perhaps be applied here.
And it's not only a case of improving the quality of the data; it's also about efficiency in resources, because perhaps you're able to identify and screen these people out quite early and say thank you, but no thank you. How do you deal with some of those things? And, for example, one of the other things I find fascinating is how the Internet can be used for tasks where people are paid for completing them, and they get paid if they do it well. There's an incentive to do it well, and otherwise, if they don't, they might not get repeat commissions. So how can some of those wider experiences of using these tools aid us? And we've seen over a number of years that online research is exploding. It's not going to go away. It's quick and it's fast. Some research providers say they can get your data back from a thousand people within four hours. I mean, yes, for three or four questions, but it's just amazing: it comes back summarized, all tested and done, and it's cheap compared to what else we might do. We need to think about where we want to be in that, in some of the research we do, and make use of technology there.
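The ad hoc screening rules Sara describes, culling respondents who give the same answer too many times in a row or finish implausibly fast, can be sketched in a few lines. This is only an illustration of the idea; the thresholds and function names here are hypothetical, not any provider's standard:

```python
# Sketch of the ad hoc survey-screening rules discussed above.
# Thresholds (max_run, min_seconds) are illustrative, not a standard.

def longest_run(responses):
    """Length of the longest run of identical consecutive responses."""
    best = run = 1
    for prev, curr in zip(responses, responses[1:]):
        run = run + 1 if curr == prev else 1
        best = max(best, run)
    return best

def flag_respondent(responses, seconds_taken, max_run=9, min_seconds=120):
    """Flag a respondent as possibly disengaged, with reasons."""
    reasons = []
    if longest_run(responses) > max_run:
        reasons.append("flatline")   # same answer too many times in a row
    if seconds_taken < min_seconds:
        reasons.append("too_fast")   # finished implausibly quickly
    return reasons

# Example: 12 identical ratings, finished in 45 seconds
print(flag_respondent([5] * 12, 45))   # ['flatline', 'too_fast']
```

As Sara notes, such rules misfire on legitimate cases (ten statements where "neither agree nor disagree" is a sensible answer to all), which is why they are only a crude first pass; the R package `careless` that comes up later in the conversation implements more developed versions of these indices.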
John: Yeah, this is really fascinating, because I've always thought that if you have, for example, a webcam on somebody while they're completing an online survey, yes, it would be good because maybe the facial expressions are going to help enrich the data, but they're also helping you check on the data quality. So there's kind of a theme here about using technology to collect enriched data. In fact, this is a quick aside, but my wife is a professor at the local university, VCU, and she does research to help children with Down syndrome. She just ran an online survey about the experiences of children with Down syndrome during the coronavirus lockdown. Somehow this survey started to get a lot of responses, and because of the way the whole thing was set up, with some rules about kicking people out of the survey, people were going to get paid five dollars to complete it without much verification of who actually completed it. So there was clearly an issue: a non-trivial percentage of the responses coming in were fraudulent. She's been working with a package in R called careless, which you might be interested in; it's good for data cleaning. But after the survey was closed, she got a bunch of emails, clearly written by AI, explaining that they wanted to participate in the survey. It made me think the scammers have now leveled up, and they're using not only bots but AI to trick the system. So, yeah, I think the more we can get these other measurements involved, like facial emotion recognition or whatever else, and I think...
Sara: I've heard a similar story from when we do work in China with some of the online panels people come in through. Providers there spend a huge amount of time screening and checking the data, because they expect that a large number of responses are just going to be random noise. When we've worked with research providers there, that's one of the things they've tried to convince us of, without us even asking: that they take it very, very seriously. Which is an indication that it is a problem. I don't know whether it's more of a problem in some cultures than others, not necessarily, but there is at least a mindfulness around some of these things.
John: Yeah, yeah. This is one of the reasons I actually think voice-activated surveys can be better in some settings too: right now, you know, the Amazon devices can tell who's talking.
John: And at some point we'll have access to that as researchers, because, you know, we've actually had a request on this: can we verify someone's age, right? Can you use the voice to verify that the person is, in fact, who they say they are? So it's interesting. I guess it's always an arms race between the scammers and the researchers. Yeah, it's very interesting. Sara, this has been amazing. I mean, I could talk to you for, I don't know, several more hours, I'm sure.
Sara: Well, half an hour isn't long when you get going.
John: Okay, so just to wrap things up. You're in a great position, I think, to give advice to young researchers, based on your research experience, but also in your role as editor for Food Quality and Preference and as someone who's so well networked. So what advice would you give? Say you've got someone who's just completed their master's, or maybe their PhD, and they're going out into the world. What advice would you have for that person?
Sara: Well, I think there's a very basic, again almost a little bit of a dichotomy: are you interested in working in industry, or are you more interested in pursuing an academic career? If you're interested in an academic career, I would say one of the first things to do is to see if you can get access to reviewing papers, because it's a fantastic way to learn, to see how others are writing, and that's something that's going to be expected of you as a professor. It's learning how to do it well, but in particular it's also learning how not to do it. I certainly remember from when I first graduated, I was lucky to work with Hal MacFie, who was the Food Quality and Preference editor at the time, who told me about those opportunities but also said, you know, you sign yourself up and make sure that people know you're interested. I've learned so much from that; my way of thinking about and approaching research has been tremendously shaped by it. And it boils down, almost to the point of being very square and not looking outside that square, to being really, really clear whenever I start. And this goes whether you're in academia or in industry: be very clear about what your research questions are, what your aims are, and follow that through. This is the information I want; is the research methodology I'm proposing going to give me these answers? And be really brutally honest with yourself. If you cannot see those direct linkages, then you're probably not quite there. You have to be able to say, for every single piece of information you obtain, how is it contributing to answering your research questions? Because another thing that often happens is that we have a tendency to collect too much information that we then don't use.
What do you need in order to answer your research question? And if you're able to be very clear and follow that path all the way through, it kind of comes naturally, you know, what the next steps are. And you always have the pros and cons: what are the pros, what are the cons of the decisions that I'm making here? You're always going to have to weigh them up. But that process of following things through, I think, leads to better research in general, because we see a lot of work that often doesn't seem so well thought through. And if it had been a little more well thought through, there would have been a lot more value from it, either for industry or in terms of novel scientific contribution.
John: Well, that definitely speaks to me. I mean, a question I often ask myself is, what problem are we trying to solve? What problem are we really trying to solve? And just keep drilling down on that, and not get distracted by all the other things that you are doing.
Sara: I've found over time that not everybody likes to think that way. You know, more holistic thinkers are not necessarily breaking things down, and for those people it can be very difficult to follow that path. So, you know, what often makes really good research is having a group of people with different types of thinking together: people who work at the overall level, and people who can break things down and follow through at different levels. And something that I've found in my own personal research is that I really enjoy working with people, lots of different people, new people, old people, it doesn't really matter, but people where I can see that we are on the same path, trying to solve something of shared interest. And that's why working with research methodology is great, because there are so many people; you can work in many areas and on many topics if you have a core around that. And I've certainly benefited from that to date.
John: Alright, so it's been a real pleasure talking to you today, and I really appreciate you being on the call. If someone wants to follow up with you, what's the best way for them to get in touch with you or at least some ways for them to reach out?
Sara: I am on LinkedIn. I don't check it super often, but I do check it. And otherwise, my email details are also available through the editor contacts for Food Quality and Preference.
John: I see, and then someone could also volunteer to review papers. Wonderful. Okay, Sara, this has been great. Thank you so much for being on the show.
Sara: Thank you. Pleasure, John.
John: Okay, that's it. Hope you enjoyed this conversation. If you did, please help us grow our audience by telling your friends about AigoraCast and leaving us a positive review on iTunes. Thanks!
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!