Maria Keckler - Human Skills
- Matthew Saweikis
- Jul 1
- 24 min read
Welcome to "AigoraCast", conversations with industry experts on how new technologies are transforming sensory and consumer science!
AigoraCast is available on Apple Podcasts, Spotify, Podcast Republic, Pandora, and Amazon Music. Remember to subscribe, and please leave a positive review if you like what you hear!
Hi, I’m Dr. John Ennis, co-founder of Aigora and host of AigoraCast. This episode I enjoyed speaking with Dr. Maria Keckler, co-founder of the EmpathyRx Lab at San Diego State University. Maria is very easy to talk to, and I really enjoyed her ideas on the importance of human skills in the age of AI. I hope you enjoy this conversation as much as I did, and remember to subscribe to AigoraCast to hear more conversations like this one in the future!
Guest: Dr. Maria Keckler, EmpathyRx Lab, SDSU
Dr. Maria Keckler is a keynote speaker, executive advisor, and strategist, known for her expertise in leadership development amidst the rise of AI. She holds a Ph.D. in Education from Claremont Graduate University, where she focused on the interplay of neuroscience, education, and behavior change. Dr. Keckler excels at bridging academic research with practical applications, empowering leaders to leverage human strengths.
Her commitment to empathetic leadership is evident in her roles as co-founder of the Healthcare EmpathyRx Lab at SDSU and editor of The Structural Skills Project. Dr. Keckler is also the author of "Bridge Builders: How Superb Communicators Get What They Want" and has been honored as one of San Diego's 50 Most Influential Latino Leaders.
Transcript (Semi-automated, forgive typos!)
(Podcast Intro Music)
Announcer: Welcome to Aigoracast. Conversations with industry experts on how new technologies are impacting sensory and consumer science.
(Music Fades)
John Ennis: Okay, welcome back everyone to another episode of Aigoracast. Today I'm very happy to have my friend Dr. Maria Keckler on the show. Dr. Maria Keckler is a keynote speaker, executive advisor, and strategist known for her expertise in leadership development amidst the rise of AI. She holds a PhD in education from Claremont Graduate University, where she focused on the interplay of neuroscience, education, and behavior change. Dr. Keckler excels at bridging academic research with practical applications, empowering leaders to leverage human strengths. Her commitment to empathetic leadership is evident in her roles as co-founder of the Healthcare Empathy RX Lab at SDSU and editor of The Structural Skills Project. Dr. Keckler is also the author of Bridge Builders: How Superb Communicators Get What They Want and has been honored as one of San Diego's 50 most influential Latino leaders. So, Maria, welcome to the show.
Maria Keckler: Thank you, John. Good to be here.
John Ennis: Yeah, great. So, maybe we should talk a little bit about, uh, how we got to know each other, because I think it's an interesting story. So maybe you want to share a little bit about your experience and then we can talk about, you know, how we ended up on the show together here.
Maria Keckler: Yeah, of course. I, um, have been playing around with AI for some time. I love experimenting as soon as something comes out. And one of those weeks, I had uploaded a draft of an article I had finished, and I asked it to proofread and edit it. And it produced something that had completely stripped my voice. It changed a number of words, and it was as if I had disappeared from the page. And I posted a little sample on LinkedIn and said, you know, more proof that humans are still relevant today. We need humans. And you responded to that and said, "Keep on, keep on practicing, you'll get better."
John Ennis: (Laughs)
Maria Keckler: Okay, human error, got it.
John Ennis: Yeah, yeah, yeah, it's funny. Well, yeah. I mean, I meet a lot of people that way. And sometimes people at my company tell me, "John, you shouldn't be so disagreeable on LinkedIn." But actually, a lot more interesting things come out of disagreeing with people and then talking about it than if it's just, you know, an AI-generated "Yes, great post."
Maria Keckler: Uh-huh.
John Ennis: So, no, but you have a valid point, though. Okay, we'll go into this, then we'll go into your background a little bit; I do think we need to bring the audience up to speed on that. But what I think is really interesting about AI is that, more than maybe any tool in human history, the user matters. The details of who the user is and how the tool is used matter, in ways that are subtle. With a lot of tools, suppose you're talking about a power saw, right? I don't know how to use a power saw, and I know I don't know how to use it. So when I use it, it's a disaster. Okay, fine. But everybody feels like they know how to talk, right? And so they talk to the AI, and they don't get the results they want, and they think there's something wrong with the machine. And I think that's a really interesting fact. So maybe you can talk a little bit about your experiences interacting with AI and how you've experienced this effect where you get different results depending on how you interact with it.
Maria Keckler: No, absolutely. You know, it takes me back to, back in the day, this is a couple of decades ago, probably longer, I was trained as a technical writer, and my role was really to work with a lot of scientists and engineers, taking highly technical information and translating it into very simple, readable copy. And I have found that I need to go back to that mindset: AI is going to give you an output based on the clarity that you give it, right? Oftentimes, when we have a conversation, we go down these rabbit trails and explain things in not the clearest way. AI really needs just simple, clear instructions.
John Ennis: Right.
Maria Keckler: And I'm finding that getting rid of any fluff, any superficial information, just getting down to exactly what you want, giving it context. The better I get at it, the better the outcome is.
John Ennis: Right. That's right. That's a hot topic. You hear people talk about context engineering. You know, it was prompt engineering and then people kind of figured that out. Now it's on to context. It's like, okay, you ask clearly for what you want, but if the AI doesn't have the information to help you, then it can't help you.
Maria Keckler: I mean, even just giving it a role. I mean, something as simple as, "You are my copy editor."
John Ennis: Right. Right.
Maria Keckler: Right? You know, the copy is complete. It just needs proofreading. Do not replace any keywords. Point out, in a separate list, issues with clarity, but don't make substantial changes.
John Ennis: Right. Exactly. Keep my voice. That's a big one. Keep my voice.
Maria Keckler: Exactly. And so I have learned that the more clarity I give it, a role, the expectations, the faster the collaboration is.
John Ennis: Yeah, definitely. Okay. Well, maybe now let's take a step back and talk about your journey and how you got here, because most of our listeners are going to be sensory and consumer scientists, and I think there is actually a lot of overlap between the work you've done on the cognitive neuroscience side and some of your background in behavioral research and consumer science. That's why I wanted to have you on the show; I thought you'd have a lot of insights that would be valuable to consumer scientists. And I think that oftentimes there's more to learn from adjacent fields: you probably know your own field pretty well, but interesting ideas are to be found in the adjacent fields. So maybe you could talk about your journey, and how did we eventually end up here on the show together?
Maria Keckler: Yeah, for sure. You know, I came into my work really through communication. My undergrad and my master's degree were in English, and I have a background in linguistics. When I was just beginning my career, I found that I was also very entrepreneurial. And so, with the help of a mentor, I started my first company, a company focused on technical communication. As I mentioned before, it was really about helping technical leaders, whether in healthcare or engineering or science, produce content that would be readable for lay audiences. And then over time, I began to transition into helping scientists and executive leaders develop presentation skills: taking content that is highly technical, very heavy, and really creating a story that could be absorbed by investors, by internal audiences, by people they wanted to influence.
And that was really interesting. While I was doing that, I was doing a lot of reading on the neuroscience of storytelling, and I came across the work of the person who eventually became my doctoral advisor at Claremont, Paul Zak. And Paul Zak, I mean, you can do a simple Google search and you'll find that he is one of the pioneers who really looked at the effects of storytelling on the brain, and specifically how, when we engage in storytelling and tell effective stories, and the keyword there is effective, we actually release oxytocin. And oxytocin, right, is responsible for building connection, trust, empathy. And so I used a lot of that background research to go, whether it be into healthcare or science or engineering, and work with teams, helping them translate their content into story-driven communication.
And as I moved through a lot of that work, I found that there were some gaps in the research in healthcare specifically, because when healthcare educators or doctors or nurses were dealing with patients, there was a lack of empathy. And there's a lot of literature out there that shows that over the course of the medical education journey, or even in practice, empathy continues to decline. So imagine you have nursing students or medical students who come into the profession because they want to help people. They come in high in empathy, and by the time we graduate them, their empathy has plummeted. And then they go into a field where empathy looks like inefficiency, because we focus on metrics of efficiency. And so that empathy decline, plus burnout, plus desensitizing ourselves from our human connection to our patients, continues to affect outcomes, even for patients.
And so I returned to school to get my PhD because I wanted to use the science of storytelling to preserve empathy in healthcare. So there are a lot of things happening in there. But remember, back in the day when I was focused on technical communication, I was just fascinated by technology, by science. I was working with a lot of very smart people, and as I translated their technical information, I was curious and interested and learning a lot from them. And so naturally, as AI began to show up on the scene, I'm always the first one. I want to know more, I want to read all about it, I want to test it. I'm always an early user, if you will. And in the process, here's really where all these seemingly different points of my career, whether it be technical communication or empathy or science or healthcare, began to come together, because I realized that more than ever, in this world of AI, we need to have what we naturally call soft skills, right? The human skills: empathy, communication, collaboration, others like strategic curiosity or proactive foresight. And so I was having a conversation with one of my mentees and I said, "You know, we need to just stop calling them soft skills." And I decided I was going to publish an article called "Stop Calling Them Soft Skills," and I published it on Medium. And it just went viral, and I began to have conversations with people all around the world about the fact that there is a new awareness, whether you are, say, a user experience designer or someone working on the technical aspects of AI, that as AI becomes more useful to us, technical skill is no longer our differentiator. Right? The technical skills are not our differentiator because a lot of those skills can now be delegated to AI.
And so what is going to set us apart is really those human skills: our ability to build trust with others, our ability to collaborate, to present information, to tell stories, to create human connection. And so that's where I find myself now, and it's been wonderful to see that it's a message that resonates not only in the United States; within two weeks, I was talking to people in the UK and South Africa, Kenya, Glasgow, Australia, France, Spain, Mexico. And there seems to be a consensus that in order for us to really become assets within our organizations, or as business owners and entrepreneurs, the more we invest in the technical AI skills alongside our human skills, that is really what is going to make us different and much more useful in this next generation.
John Ennis: Yeah, a lot of that really resonates with me. You know, I did a long post, actually, I've written an article that I'll be publishing soon on the future of sensory, and I think a big difference between humans and machines is that humans are evolved. And I think that a lot of the storytelling comes down to the fact that, evolutionarily, storytelling was very useful for humans to organize. One of the big differences between humans and other animals is our ability to understand and organize through stories. I'm preaching to the choir here on this, but, um, yeah. And I think that in sensory science, there's this big question of what's going to happen with AI? Is everything just going to get automated and there won't be any jobs left? And I think it's the opposite. I think that life is lived through the senses, and humans have evolved to perceive things in a way that machines don't perceive them. You know, they can kind of simulate it, but they don't feel it. And I really like that distinction between human and technical, instead of soft and hard. I think that's really interesting. And I think that as far as sensory and consumer science goes, understanding the human experience is going to be really important. And the machine can never understand it. It doesn't feel it. It might be able to feed it back to you, might be able to pretend, but at the end of the day, it doesn't actually feel what a human feels. So, maybe you can talk about, what are some of the lessons you've learned on this journey as you've been experimenting with different AI tools? Let me ask you, what's been your human experience as a user of AI?
Maria Keckler: Yeah, I think, um, having these conversations, I mean, you are one of them, right? I met you through a conversation that I had about AI. And across all the people I've spoken with around the world, and even in the talks that I do on what I call the structural skills, which is my rebrand of soft skills, I am finding that individuals have inherently known all along that they needed to invest in effective human skills. Relational skills: how do I relate to other people? Things that work not only at work, in your career, but also in your home, with your spouse, with your kids. I should be better at communicating my ideas, in presentations, or telling better stories, or listening. And now I sense that there is an understanding that there's even more of an urgency to actually invest in them. Because in the past, if I could focus on what I'm good at, the technical aspect of my career, for example, then those types of skills, I know they're useful, I know they're important, but I can put them off. They felt optional, because I found my value in being an expert in my field. That's what gave me identity. Now I sense that there is a shift where more people realize that a key asset today is actually investing in your human skills.
And as we develop the nuances of AI, I mean, the work that you do, you still have to work with teams. We still have to address some of the moral concerns that come up with this great innovation. And all of that, we cannot delegate to a machine. We are going to be the beneficiaries, or the ones who bear the brunt, of any lack of discipline as we develop these amazing tools. And we cannot afford to just sit in a cubicle all day long coding and not relating, not having conversations with the very people who are going to be impacted.
John Ennis: No, I think that's right. Well, the more that I've worked with AI, actually, the less I am worried that AI is just going to replace all the jobs and humans will have nothing to do. I think that's just incorrect. I think that only a human can give value to things. Even if you have a metric, okay, the machine can optimize to a metric. But what's important, what is it that matters to other humans? I think only a human can really guide a machine like that. But I do think it's interesting what you said, that if someone has had a job as, say, a coder, and they sit in their cubicle and code all day long, it's going to be a tough time. They are going to have to invest in their human skills. So what's some of the advice that you give in terms of how to invest in your human skills? Say someone's a developer now, and maybe they had a comfortable job and they're looking at that world going away. They're not going to be able to just sit and write code anymore. They're going to have to level up somehow. What advice would you give to that person?
Maria Keckler: I mean, number one, you know, I wrote an article called "Strategic Curiosity," right? And so, developing an openness to the possibility that you have to think differently, that you don't know what you don't know. I developed a program for a client many years ago, a large corporation. It was called You, Inc. And it was really about developing an entrepreneurial mindset, whether you work inside an organization or outside. Now, for those of us who already have an entrepreneurial mindset and are constantly thinking about how we export, in my case, how do I export my research from the lab to the market so that it actually has a greater impact, we already know what that looks like. We don't even have to think in terms of, "What does it look like to be an entrepreneur?" It's just within us. But a lot of people could really benefit from understanding what that looks like. How do you expand your skills beyond just a very linear way of thinking or operating? You have to understand, how do you market yourself? How do you market the ideas that you have so that you can become an asset to your organization? How do you actually integrate your expertise across multidisciplinary lenses, like you said, talking to people who are not just working within the lane that you operate in? You have to be open to talking to people in other disciplines who bring totally different ideas that you probably weren't interested in before. And the more you develop that curiosity, it leads to the other skill that I also wrote about, which is proactive foresight. Then you begin to see possibilities ahead. In other words, you're not just focused on what you're coding here in front of you, not just on the task at hand, but now you're thinking ahead, like a chess player. I was a chess player as a kid.
And so you're always thinking five steps ahead, right? And even though you have to pivot at any given time, given the conditions or the changes, uh, within your organization or the innovation that is happening, you are constantly looking beyond just this moment. And now you have to speak up. You want to raise your hand. You want to be a contributor that helps other people understand that there are more, there's more at stake than what we're looking at today.
John Ennis: Mm. Yeah, it's really interesting. AI is such a double-edged sword, I think. On the one hand, it's very scary to see everything get automated, people with comfortable jobs getting automated. But on the other hand, I really believe every person is unique and every person is meant to do some unique thing. There's some calling that everybody has. And I do think that this is a good time for people to think about, what is it that I'm supposed to be doing, and to lean into what makes them unique so they can express themselves. Versus, you know, if you're writing code, it's the same as anybody else. I mean, it's not going to be exactly the same as what anybody else would write, but it's very similar. Whereas if you're out there at the intersection of multiple fields... You're totally right about healthcare lacking empathy, and I'm very glad you're doing that work, actually. You know, my wife does research with children with intellectual disabilities, and she's using AI all the time to help her do that, so she can do her niche better. I'm interested in the technical side, basically bringing more technology into sensory and consumer science, so I can lean into that. But I think it is a little bit scary for people to have to think, "Okay, what makes me different, unique?" And I do believe everybody has something. So let's take a specific example. Let's come back to that coder who's just been writing code all day long. What are some actual steps they could take to get outside the comfortable box they've been in and start to grow as a person and benefit from all these changes that are happening?
Maria Keckler: I mean, you can use the tools, right? I forget who it was, a podcast or maybe a YouTube video, one of those thought leaders, who said, "Okay, take 20, 40 minutes with your AI tool and just do a brain dump of who you are, your skills, your beliefs." Just do a brain dump. And then you can ask it, "What are my blind spots, right? How do I become relevant in this moment, and what are my blind spots? And give me three suggestions of things that I can learn to elevate my value as times change." So you can start there. It's almost like having a life coach help you understand how you can expand your thinking. And we have so much learning available to us. I subscribe to MasterClass, which is a platform where you can learn from the world's best, and there's another version called BBC Maestro. You have the world's best in every field teaching you about things that you probably would never learn otherwise. I've learned about global diplomacy. I've learned about different science areas that I wouldn't normally read about, but it's just interesting. And the more, I think, that you become a learner, the more you are going to have ideas. You have to invest in your crea— I'm sorry, in your creativity. That's another thing. I've been talking a lot about, do not delegate your creativity to AI. Do not do that. You know, there's that term that evolved from an MIT study, "cognitive debt," right? What they looked at is how the brain operates when, let's say, you have AI produce all the first drafts of your articles, versus someone who has to do the first drafts and maybe does Google searches or research searches on their own.
And you can see clearly how the one who delegates all that initial creativity, brainstorming, and really wrestling with ideas is creating a laziness in the brain. Fewer areas of the brain are firing up, and over time, that's what they're saying, you develop this cognitive debt where you are not able to be as creative, as innovative. And actually, reading that study really scared me, because I have always prided myself on being one of the most creative people in the room, and part of it is because I'm very multidisciplinary and I read from many fields. And so I thought, "Okay, I don't want to lose that. That is an asset for me." And so I need to nurture that, and I refuse to delegate my creativity to AI. But you can start by asking AI, "Give me three things that I should learn over the next six months," right? And again, it could be completely unrelated to your field, but it will not be a waste of time.
John Ennis: Right. Well, I'm going to do that right after this call. That's a good idea. No, it's interesting. Well, that brings us, I think, to probably our last topic, which is education, because I know we had an interesting discussion about this in our first call, about the impact of AI on education. And I do think there is a big danger there. You and I grew up, you know, I'm probably older than you, we don't need to get into that, but we grew up in the before times. I feel very lucky that I still have my wits about me, that I learned how to work without AI, and now I can leverage it, and it's a great tool. But I look at my son, for example, he's nine, or my daughter, she's five. She just talks to the air and stuff happens, like at home with Alexa. And I just worry, okay, are they just going to be mentally lazy because they never have to think about anything? So what would be your advice for, say, educators on how to incorporate AI? Because AI is a fact of life; the children have to learn how to use it. How do you think AI can help strengthen education instead of, you know, basically undermining it?
Maria Keckler: Yeah. You know, I was reminded recently, I started teaching at the college level when social media was just beginning to emerge. And so I was developing a lot of curriculum, and also professional development curriculum at the time, and there was a term, "digital natives." You may remember that, right? So we were always talking about the digital natives that we're educating and how we needed to show up as educators in the classroom to meet the needs of these students: you give them an iPad and they already know how to scroll, they already know how to open an app. And so now we have the same thing. We are dealing now with AI natives who will not know a time when they didn't have an AI tool, right? And so the first thing that we need to do, I believe, as educators, is to stop complaining that our students are cheating. The longer we stay in that space of frustration, of adopting the role of the police, I'm going to police my students to see if they're cheating, and I know there are a lot of detection tools, and there's a problem with some of them because there are a lot of false positives, the longer that we stay in that space, we are not entering into a place of creativity as educators. Instead, I think that as educators, we need to put together think tanks where we share our best practices for creating innovative lessons. And you know what, I actually wrote an article about that, because I think this is the disruption and the reckoning that education needed. Because if you look at innovation, just look at the car, right, from the horse and buggy to your Tesla, there has been visible, radical innovation there.
Now look at the classroom of, I don't know, the last century compared to today. It's virtually the same, pretty much, except now we have chairs with wheels so we can move them and create circles, and we have a projector. But it's still very much focused on the sage on the stage, and you are in a passive role as a student.
We have been able to get away with that for a long time, and we cannot do that anymore. And so I was having a conversation with a colleague, actually one of the people I talked to in the UK, and he shared an example of one of his colleagues, a writing teacher, who says, "I'm going to assume that they're going to try to use AI." In his mind, he wants students to learn to be curious and to understand the material. So he collects the assignments electronically, and before the next class he puts those assignments through AI and says, "Generate a five-question quiz for each one of those essays." Then he shows up to class and says, "Great, got your assignments. You'll get your grades later," and he distributes the quizzes, each on the student's own paper. Yeah, personalized. And so it's, "Okay, now you have to do this," either in writing or orally. Now, that is just an innovation right there, because he is now quizzing them on the material. He wants to know if they really have thought deeply about the material. That is just one example. We need to have more of those think tanks. How do I create immersive experiences for my students? And to be honest with you, I go back to the first college class that I taught. It was technical writing for engineers. And the students, even at that time, had to create a new technology. The first thing they had to do was bring scrap technology from their closets or drawers, and they were going to create a prototype of a technology that didn't exist, that solved a real problem. And then they had to collaborate, they had to ideate, they had to do design thinking, and then they had to build a product, and they had to write a manual about it.
I mean, even that exercise could be adapted to today, integrating AI as one of your collaborators to create an innovation. And so I am excited, I think, about the possibilities, and I would love to see more educators really start thinking about how they get to change the way they teach forever. They no longer need to provide all the content; the students have it at their fingertips. Our role now is to create highly immersive, practical, project-based experiences so our students actually leave our classrooms prepared for the field.
John Ennis: Yeah, that's really fascinating. Okay, great. Well, Maria, I could talk to you forever. You're a very interesting person, and I would encourage people to, uh, reach out to you. But we always close with advice for young people. So what would be your advice to someone, say, you know, 20, 25, graduating college, going out into the world? What would be your advice to them right now?
Maria Keckler: Yeah, I would say the same thing that we spoke about in this podcast. You know, go ask AI, "Give me three things that I should learn right away that will set me apart in AI." I bet you one of those things is going to be public speaking skills. Right. You know, telling great stories. And if you have been someone focused in a very siloed type of program, where you only focus, let's say, on engineering from a particular lens, try to become multidisciplinary. For me, I consider myself transdisciplinary, because transdisciplinary thinking really expands into an entrepreneurial mindset as well. So it's not just the disciplines, but how they actually operate within a commerce type of environment. I think there are so many exciting possibilities. And then I will also just say that we have the opportunity to reshape AI, but we have to remain highly human. We have a lot of biases, right? And AI will just adopt the same biases. And so investing in self-awareness and emotional intelligence will ultimately help the way that we create these products for the future.
John Ennis: Hm. Yeah, fascinating. I have to resist asking follow-up questions, though, because we're out of time. So how can people get in touch with you, Maria?
Maria Keckler: Yes, DrMariaKeckler.com is my website, and I am very active on LinkedIn. I also have a publication on Medium called The Structural Skills Project, where a lot of the articles that I mentioned are housed.
John Ennis: Okay, fascinating. Okay, well, wonderful. Well, thanks a lot, Maria. It's been great having you on the show.
Maria Keckler: Thank you. Thank you so much.
(Podcast Outro Music)
Announcer: Okay, that's it. Hope you enjoyed this conversation. If you did, please help us grow our audience by telling a friend about Aigoracast and leaving us a positive review on iTunes. And if you'd like to learn more about Aigora, please visit us at www.aigora.com. Thanks.
AigoraCast Episode "Human Skills", a conversation with Dr. Maria Keckler, EmpathyRx Lab, SDSU, is now live!
Visit your favorite platform and remember to subscribe:
Apple Podcasts: https://apple.co/2LZcgQp
Stitcher: https://bit.ly/2M1ivmW
Google Podcasts: https://bit.ly/338FRwK
Spotify: https://spoti.fi/2MpP64N
Please leave a positive review if you like what you hear!
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!