
Episode 93

Navigating AI Resistance: Overcoming Fears and Misconceptions

Liz Zaborowska, Founder & CEO of Bhava Communications and Spring Catalyst, explores the common fears and misconceptions that lead to resistance against AI adoption. She emphasizes the importance of proper training and support, ensuring that team members feel confident and secure in their use of AI tools.

Transcript

Luke: [00:00:00] You’re listening to a new episode of The Brave Technologist, and this one features Liz Zaborowska, a serial entrepreneur and current CEO of Spring Catalyst, where she helps teams optimize performance and navigate AI adoption. Liz is also the founder of Bhava Communications, an award-winning marketing, PR, and social media agency that’s helped hundreds of enterprise and technology companies stand out as category leaders.

Luke: In this episode, we discussed why AI resistance is prevalent, the underlying fears and misconceptions that often accompany new technology, principles of responsible AI use and the policies that support it, and how AI tools can complement rather than replace human creativity, along with maintaining authenticity in AI-generated content.

Luke: And now for this week’s episode of The Brave Technologist.

Luke: Liz, welcome to the Brave Technologist. How are you doing today?

Liz: I’m doing great. Thanks for having me.

Luke: Yeah, I’ve been looking forward to this one. There’s a lot of stuff we can talk about [00:01:00] here today. I know you’re working at Spring Catalyst on AI adoption. What are the biggest people and process challenges that you see teams facing when trying to adopt AI?

Liz: It’s kind of all over the map, but the biggest one I would say is resistance to trying new things, even as people are trying to do their jobs and their plates are already really full. So a lot of times the managers will say, Hey, use these tools, but there isn’t proper training.

Liz: There isn’t a sense of safety around, if I use this tool, is my work going to be valued? So you need policies and procedures and a sense of, Hey, it’s really good if you do use this. And I would say the other problem is that there are silos that people are using these tools in. So one person might be really good at it, and the other person’s kind of hesitant.

Liz: And the way to get over that is to create a sharing environment. So all of us on our team at Spring [00:02:00] Catalyst and Bhava Communications, if we discover a really good way to use a tool, or a good set of prompts, or whatever it happens to be, we very actively share it with everybody else, and we’re trying to teach our clients to do the same kind of thing.

Luke: Yeah, it seems like there are a couple of things here. The form factor for AI in practical use is very natural and conversational, and it seems like a lot of those things people have put their guard up for, on OPSEC and things like that, that your IT manager’s been trying to train you on?

Luke: They get disarmed in this process, right? And people kind of lose sight of, oh wow, if I put this information in, it’s pretty sensitive. But I’m just in the middle of a conversation and it just kind of happened. You know, those types of things. Mm-hmm. I totally relate with your point.

Luke: I think it’s just a whole new way of working, and everybody’s in different places with it, right? Mm-hmm.

Liz: Totally. There is a [00:03:00] little bit of the, Hey, if I use this tool, is it going to, you know, make me obsolete? Mm-hmm. Am I gonna teach the tool how to do what I’m doing?

Liz: And you have to make your teams feel like, Hey, that’s not what we’re trying to do here. We’re trying to be more efficient, better at the things that we already do. There absolutely is a time savings and there absolutely is a productivity gain, but it’s not like you get that time back. Right. Right.

Liz: Whereas this is like, oh, now I’m gonna be required to do more. But you know, one of the things I talk about a lot is, I’m writing this article about how we’re in the 1920s of the 21st century.

Luke: Mm.

Liz: It doesn’t matter right now if you’re 70 years old or 20 years old; all of us think of the 1920s as the olden days.

Liz: And you know, there were still horses on the streets, and there were cars, and some people had telephones, some people had plumbing. Right? But not everybody did. And so life was slower. But if you take that analogy to now, it’s not like, okay, well, don’t use the tool. Like, don’t use the plumbing, don’t use the phone.

Liz: Right. You know what I mean? [00:04:00] Right. So, yeah, everything’s accelerating, but we have to embrace these tools, because if you don’t, you’re gonna be behind. So we need to overcome these obstacles.

Luke: Yeah, absolutely. And would you say that’s the biggest resistance or fear you see from talking to leaders of teams?

Luke: This idea of, oh, this thing might replace me? Or are there other areas where you’re seeing concern from the people you talk to?

Liz: There’s the concern of, that might replace me. There’s the concern of, you’re requiring me to do more with less time.

Liz: And especially if somebody hasn’t figured out, or been taught, how to adopt the tools, it actually sometimes ends up taking them more time to do things. Mm-hmm. Then there’s the fear that the quality of the work isn’t going to be as high. That it’s just gonna be AI slop, and that’s real. You know, that’s, yeah.

Liz: Yeah. And then there’s the whole question of, is this making us smarter? Is it making us dumber? Yeah.

Luke: Yeah, that’s a great point to lead into, because it seems like things start to look the same, or similar, or less authentic. Are we [00:05:00] just curating slop? Where’s that line, right?

Luke: Between, this thing’s actually making me more productive and more efficient, versus, no, we’re just turning into drones. Mm-hmm. What are your feelings on that? Because you’re around people a lot who are using these things.

Luke: Is there a clear line between, okay, this is just top-down mandated hype at this point, versus real product fit with these things? I’d love to get your perspective on that.

Liz: I don’t think it’s just hype. Mm-hmm.

Liz: These hype cycles are common, so people are at that point where they’re saying, this has to be hype. It’s not hype, right? I mean, mm-hmm, it is making certain kinds of work better. But I think we have a responsibility; there’s a new layer of responsibility around emphasizing quality and authenticity.

Liz: And recognizing quality and authenticity, and not saying, oh yeah, I just did the thing, I’m done, I’m gonna [00:06:00] get it off my plate really fast, and whether it’s good or not is now the problem of whoever I’m giving it to. So I think we absolutely have to be reinforcing with our teams: yes, use these tools, but it’s also your responsibility as an individual

Liz: to make sure that the work you’re providing isn’t rote, isn’t just something that sounds like ChatGPT. Use it as a jumping-off point. Use it to take things that you’ve created and improve them, or come up with an idea that maybe you hadn’t come up with. And if you’re not doing that, honestly, at this point in 2025, you’re kind of missing out, right?

Liz: Yeah, you really should be doing that. But at the same time, know when to stop. If it’s going down some kind of rabbit hole and it’s starting to really sound bad, just stop. Use your brain,

Luke: you know? Right, right. No, that’s a great point. Whether it’s the hallucinations with these things, or just [00:07:00] fact checking, right?

Luke: I think that if we are getting stupider, or dumber, or however we wanna frame it, it’s less around critical thought and more around just good practices of fact checking. Mm-hmm. These prompts will still output a couple of different variants on information.

Luke: From what I see at least, there’s still this need to make sure the facts are right, and to make sure your sources are good, and all of that. That seems like it’ll continue to be in demand. I don’t know what you think on that front.


Liz: A hundred percent. So absolutely fact check. Funny enough, you can use AI to fact check stuff that you’ve created, right? Mm-hmm. You’re looking for statistics or whatever; but then fact check the fact checking. And if it’s just spitting stuff out, you definitely have to fact check. Like when we use it for helping analyze survey data, for example, right?

Liz: Mm-hmm. [00:08:00] I mean, nine times outta 10, there is a hallucination in there somewhere. Mm-hmm. So you cannot just trust it, whether you use Perplexity, Gemini, Claude, ChatGPT, any of them. And then the other thing is to compare the tools against each other. Mm-hmm. And then if you’re using confidential data of some sort,

Liz: make sure you’re using paid subscriptions. Make sure your team is using paid subscriptions. Make sure they understand what can and can’t go into the various models, because otherwise you can really get yourself in trouble, and/or be training these models on stuff that’s actually proprietary to you.

Luke: Right. No, that’s a great point, and it’s one of those OPSEC things that people don’t realize. Unfortunately, it seems like even the policy and IT folks at a lot of companies aren’t spending enough time looking at the policies that the software has, right?

Luke: Like, who’s hosting it? Is it us or is it [00:09:00] the company? How can that get shared? It’s funny, I’ll be on calls sometimes and people will pull weird data out about our financials or something, data that’s definitely not public. Mm-hmm. And also inaccurate.

Luke: Right. And it turns out the stuff’s been in there; somebody put it in there at some point, and people are taking it as fact. It’s kind of, mm-hmm, gnarly, you know?

Liz: Yep. I mean, I’ve seen that in Crunchbase, when companies have incorrect information in there about them. Who knows who put it in?

Liz: Could have been an employee, could have been a competitor, could have been whatever. Totally. Right? And the AIs are taking this stuff as fact. So, yeah, super useful tools. Super early, super dangerous if you’re not paying attention. And I think it emphasizes that there’s a requirement for us as people in business

Liz: to let the people who have expertise be the front line, making sure they’re reviewing what’s going out before it goes out into the world. Because otherwise it’s this non-virtuous circle where the wrong stuff is going out and then it’s feeding the AIs, and all of this is moving super, super fast.

Liz: So we can’t stick our head in the sand and not use it. But we gotta [00:10:00] use it with eyes wide open.

Luke: And I think you have a really good point too about using the different tools, whether it’s for fact checking or for the work itself, right? Because they are vastly different in some cases.

Luke: And some are much better at certain things than others. It’s interesting too that people made coding this kind of bulletproof case for AI use. I’ve personally spoken with a lot of VPs of IT or DevOps people who are like, yeah, our coders are just spotting hallucinations like crazy too.

Luke: It’s just that if there’s not enough of a base layer, you’re gonna end up with these same problems no matter what you’re doing. It’s super interesting seeing how this stuff plays out practically, right?

Liz: Yeah. You need to know what you’re looking at. And if we get rid of all the smart people and all the experts inside of companies,

Liz: and people don’t know what they’re looking at, and they’ve grown up just using chatbots, mm-hmm, it can really devolve pretty quickly. So I think we’re at an interesting inflection point, whether you’re talking about content [00:11:00] or coding or any aspect of business, right?

Liz: Like, let’s be using these things, because otherwise other countries or other companies are gonna get ahead of us. Mm-hmm. At the same time, be really mindful and really smart about how we’re doing it.

Luke: Yeah. What type of mindset do you think will matter most, based on what you’re seeing? We’ve touched on some of that, right?

Luke: But is there an ideal mindset for approaching these tools in the workflow, from what you’re seeing?

Liz: Be a lifetime learner. I think that’s always been, for all of our companies, a really important criterion in how we hire: people who love to be lifetime learners.

Liz: I am. Mm-hmm. Honestly, I’d rather be learning a new app or playing with something than just sitting around watching television. Not that there’s anything wrong with watching television. But regardless of who you are, make sure that you have a bias towards trying and a bias towards action.

Liz: Understand that there’s a Cambrian explosion of these tools, right? So [00:12:00] don’t be like, oh, I just use ChatGPT and that’s enough. Try all the various permutations. Like, we use Beautiful.ai a lot for presentation creation.

Luke: Mm-hmm.

Liz: And we’ve been using it since before they really had a ton of AI features.

Liz: That’s a fantastic tool; the fact that it has guardrails for design. I was a designer for a lot of years, so for me it’s great because it’s a lot faster to get things designed well. But also for the entire team: they can design beautiful things much more quickly, things that are really impressive to our clients.

Liz: Right. So at our agency we have a lot of people who are PR professionals who are now designing amazing things. And at the same time, it doesn’t preclude us from working with design firms; we work with several different design firms and web design firms.

Liz: It’s this accelerator towards good things, if that’s the way you’re geared. Then the other thing in terms of mindset would be a carefulness, right? A care around what is being produced, a care around [00:13:00] the voice and it being authentic. Not being the purveyor of slop. I think:

Liz: be a good person, create good things. Don’t put more bad things onto the internet, you know? So, yeah. I think those two things are really important.

Luke: I kind of think about it like working with high voltage, right? Where you have to have some care.

Luke: Mm-hmm. There are a lot of benefits and amazing power there, but it can zap you hard if you’re reckless, you know? I mean, what do you see as the biggest risks right now, as AI becomes more embedded in how we communicate and create?

Liz: The biggest risks, in terms of how we communicate and how we create, are that dumbing-down thing, right? That we’re not engaging our creative muscle and that social-emotional part of what makes us human, mm-hmm, in what we put out into the world. Mm-hmm. [00:14:00] We’re mostly talking about business scenarios here, but I think it’s true for art too: making sure that we’re thoughtful about how we’re using these tools, that we don’t just give up and say, well, I guess just create it for me, so that’s it, right? I think that leads to a kind of one-dimensional or two-dimensional existence, and that’s a risk. Another risk would be that in business, AI is creating stuff for AI to consume, for AI to then use to do AI things, and it just becomes this automation loop.

Liz: And the humans are sitting on the side just watching it. Right? Like we’re watching, right, right, an ant farm. So the dehumanization piece, I think, is an issue in terms of content and creation. On the flip side, though, there are so many opportunities to ramp up creation and democratize everything from movie making to design to music making.

Liz: I mean, I’m starting to take DJ [00:15:00] classes. Nice, right? And it’s accessible, because I can just do it on my computer, and it’s super neat. So there are a lot of positives, but the risks are really real, and we have to be paying attention to them.

Luke: That’s one of the reasons I was really excited to have you on, because I feel like there’s a lot of perception out there, especially if you listen to

Luke: people in finance or VC or whatever, when they talk about this stuff. They’re like, oh, it can make a movie, it can do all these great things. But then, when you talk to people who actually create, you might expect them, based on how other people are talking about it, to be pretty fearful.

Luke: Mm-hmm. But it doesn’t seem that way. I did design work for a really long time, right? Mm-hmm. Right in that era when Photoshop and all of these computer-generated tools started to become staples. And it was never really a fearful thing. It was more like, look, I’ve got a great new way to do something amazing, with less of my time being spent on mundane things.

Luke: [00:16:00] And when you were giving me that example of how your team uses those creative tools, it feels so much like that era where, oh, we’re going from film to digital. And yeah, there will be cases where stuff gets ripped off, and people are like, okay,

Luke: maybe if my livelihood was making paints, it’s gonna change now. But for creative people, the fundamentals are still the fundamentals, right? That’s not gonna go away. Designers are still gonna design, right? What’s your take on that?

Liz: First of all, I agree. And there’s also a pendulum swing that happens. Mm-hmm. So, everything you said; and you see now young people flocking to record stores, ’cause they’re back. Right. Right. I see Gen Zs buying cameras that look exactly like the cameras I had in my twenties, mm-hmm,

Liz: that I would take out with me when I was going out. The difference now is that Bluetooth allows them to take the picture and upload it to their phone instantly. And how cool is that? Right, right. So there are these interfaces that are allowing us to create [00:17:00] content and move it between different

Liz: ways that we can interact with the content, and also work on the content, really easily. That’s amazing; the fact that that stuff just flies through the sky now and you don’t need to plug it in or whatever. So I think there’s going to be, and there already is, a love and a primacy around human content.

Liz: The value of that is increasing, and that’s probably a good thing, right? So there’s more access; some stuff is a lot more digital, some stuff is a lot more analog. There’s an AI influencer who’s awesome. Her name is Catherine; she’s AskCatGPT. You may have seen her on Instagram, right?

Liz: But she just recently sold, like, a bajillion landline phones that are connected by Bluetooth to her phone, so that she doesn’t have to be staring at the phone. And everybody’s like, this is amazing. Wow. But it’s kind of like that camera, where it’s the analog piece but it also has the benefits of the digital. [00:18:00]

Liz: So I think we’re trying to negotiate that whole dynamic right now as a society. Yeah, it’s cool.

Luke: It’s really cool. I love that records are coming back.

Liz: Me too, by the way. I’m so mad that somebody made me throw out my records back in the day. I had, oh my gosh, so many records. But I do have all my CDs still in storage, so there we go; those are gonna come back.

Luke: You know, I’m really curious too. You see people throw around how this AI era we’re in now is kind of like the advent of the internet. Do you see it that way? Or when you’re talking to people about it, how do you frame it for them?

Liz: So, I think it’s definitely a lot more dangerous than the advent of the internet, and it has a lot more power and potential than the advent of the internet. Of course, it’s intimately connected to it. I think it’s like fire. When humans found fire, we were suddenly able to stay warm, we were able to cook, we were able to clear land, we were able to see in the [00:19:00] dark. I mean, I think AI is like fire for those reasons, but also because, just like fire,

Liz: the responsibility for how it plays out in the world is on individuals. It’s on society generally. It’s on countries and regions. Right? So with fire, you can’t just say, oh yeah, we’re gonna have an open fire in the middle of a forest and everything will be great.

Liz: Right. There does have to be thought around regulation, and thought about personal responsibility too. You can’t be walking around down the street with an open candle or something like that, because someone could get hurt. Sure. So I think that, and then the transformational aspect of it, right?

Liz: Yeah. I mean, we’re in the 1920s of the 21st century, and we used to talk about, oh wow, everything is exponentially changing. For the last 20 years it’s actually been exponentially changing; it’s no longer just an analogy. And where we are in two years, in five years; you and I are gonna look at the world and be like, [00:20:00] whoa.

Liz: Right. Like the flying cars that we were promised, the Jetsons cars. In the two thousands, that didn’t happen. Oh, they’re here now.

Luke: Yeah. No, I think that’s totally on point. And to that point about it being like fire, and all the dangers that can come with that:

Luke: you spent a lot of time working with the Responsible AI Institute. Are there frameworks or principles that you wish every org that starts to implement this technology could put in place, to kind of help temper that, or put the gloves on, or whatever?

Liz: So, the Responsible AI Institute is amazing. It was founded by Manoj Saxena, who was the very first GM of IBM Watson, and I’ve had the great privilege of working with him for over a decade now. I started working with CognitiveScale, one of his startups, and he’s been doing amazing things. And he’s seen

Liz: firsthand, from the very first row: hey, this technology has a ton of potential, but if it’s not explainable, and if it’s not harnessed in a responsible way with guardrails, all [00:21:00] hell’s gonna break loose, right? Mm-hmm. So we need to band together to make sure that doesn’t happen.

Liz: And so the Institute, or RAI as it’s called, puts together various frameworks and recommendations and best practices that enterprises, and startups as well, can bring into their own organizations. So you don’t just wait for regulation to come from the sky, because as you’ve seen, it comes, it goes, it changes, it shifts, right?

Liz: So RAI helps companies map to those regulations as they come and go, but also from a sense of internal responsibility: hey, if we’re gonna have this technology inside our companies, how can we make sure, based on what our peers are perhaps doing and what the institute has

Liz: suggested, that we create some guardrails? So basically, I would go to the Responsible AI Institute online; there are big companies and little companies involved there. The other thing that Manoj has done that’s super cool is he has a [00:22:00] commercial venture called Trustwise.

Liz: Mm-hmm. It’s an API-based software solution that creates a variety of shields for things like prompt injections and agentic AI, and all kinds of things like that that enterprises are looking to deploy. So if you’re building software from scratch that has AI components in it, you can use Trustwise to help create the guardrails that map to the policies and frameworks that make sense.

Liz: And also, if you’re an enterprise and you just wanna create those guardrails for your business operations, you can use the software for that. And I’m heartened, because there’s a whole bunch of companies that are starting to pay attention to that kind of thing too.

Liz: Mm-hmm. And I think it has to be in software, right? Yeah, totally. We need to use AI for good in order to keep up with AI that’s not so good.

Luke: Right. No, and I think that’s a great point you make about regulation. In something like AI, I keep trying to think [00:23:00] of other examples where you had the kind of

Luke: top-down, C-team, executive-team mandate to implement something that’s still so new to market. Not necessarily new in general, but new in front of your face, right? That doesn’t happen very often.

Luke: And it’s really nice to see these groups out there putting frameworks together, because I feel like people look at regulation like a bandaid, or a savior, or whatever, and it never really is. Whether it’s how the regulation is composed or how it’s enforced, right,

Luke: it’s always a little bit behind. So is the institute a mix of academics and practitioners and business people? That’s kind of like, okay, cool. Yeah, yeah, yeah,

Liz: yeah. It’s all about creating bridges, right? Getting people talking about these things and caring about them.

Liz: To use the fire analogy, it’s a little like the Smokey Bear thing, right? [00:24:00] It’s one thing to have rules saying, hey, you’re not allowed to have fires out in the woods because it’s dry between this time and this time. So there are rules like that; in fact, I think right now, here, you’re not allowed to have fires going. Right. But it’s also common sense, right? It’s also having general awareness, and everybody being responsible for creating awareness in everyone around them. And it’s part of why I was super excited to be on this podcast: these are the things that we should not get tired of talking about.

Liz: We should be talking about these things all the time right now.

Luke: Totally. To inform each other. Because if you don’t, then you just basically have these doomers. And we should be thinking about the risks, right? I don’t mean to downplay the imagination there, or how bad this stuff could go. But I feel like

Luke: there are so many things that may be common sense to some [00:25:00] people and aren’t to others, right? Like, okay, my job isn’t to think about tech every minute of the day, or to try everything new, but I now have to use this thing. Right. It’s one thing I’ve seen, and it sounds like you’re seeing it too.

Luke: It’s really cool: people understanding the issues and coming together through these organizations to say, hey, this is new, but here are some ideas, right? Here are some ways we can work together and put some ideas out there that help.

Luke: And yeah, it’ll be interesting to see how it goes.

Liz: And really practical solutions too, right? Yeah. So it’s like, okay, these are the things that companies in finance should be doing now. Mm-hmm. And this is what companies in finance should be doing next year, and over the next five years. Same thing for healthcare.

Liz: Right. So in these high-stakes applications, it definitely heartens me that people are coming together, having those tough conversations, and helping each other, even across [00:26:00] competitor lines, do the right kinds of things with AI, so that it’s not just competition for the sake of competition. Because that’s the issue that’s happening in the political sphere right now.

Liz: It’s like, well, if we put in regulations, then other countries are going to get ahead in the AI race, and we don’t wanna stop innovation. I mean, that’s not a bad argument on the face of it, okay. But if that’s happening, then we need to make sure that companies feel like they have a responsibility, and that the individuals inside those companies feel like they have a responsibility, to do the right thing, whether it’s not putting out AI slop or making sure that agents aren’t going rogue, you know?

Luke: Mm-hmm.

Liz: On the other side of it.

Luke: So, how can teams balance the speed of innovation with ethical and transparent AI use? Is it a matter of training, or are there other tools they can implement to help with this?

Liz: Absolutely training. Absolutely policies inside the companies.

Liz: We’re at a point now where [00:27:00] every organization, whether big or small, should have larger or smaller policies around this stuff, so that employees aren’t guessing. Mm-hmm. And so that managers and leadership aren’t guessing. Whether it’s through an organization like RAI, or the ton of resources online now that people can just go to. Even if you just had a 10-point list of things that you can and can’t do,

Liz: or should and should not do. Like, we’re very clear: hey, our organization has centered on Claude. If you have a paid subscription to Claude, there are certain kinds of data you can put in, and there are certain kinds of data you can’t put into anything, because we have confidentiality obligations to our clients.

Liz: So some things can never [00:28:00] go into the AI tools. And hey, you can use ChatGPT and Perplexity, even if you don’t have a paid account, for stuff that’s out in the public sphere already. So our teams know what to do, and we’ve made it very, very safe for people to ask questions: hey, can I use it for this?

Liz: Can I use this tool for that? So there’s no weirdness around it. Mm-hmm. And I think having that kind of transparent, open communication between team members, and between team members and managers, is absolutely critical for this to be successful.

Luke: Awesome. Yeah. Liz, you’ve been so gracious with your time, and it’s been a fantastic discussion.

Luke: Really enjoyed it. Is there anything we didn’t cover that you want our audience to know about, just as we close this out?

Liz: I would say, just play with this stuff, right? I mean, even if it’s not germane to your business, like, play with Lovable or play with Famous. You know, create an app, create a website, create some graphics. Do some things so that you have familiarity and so you’re at the forefront of this.

Liz: And then the other thing I would say, for job seekers: do that so that you can put that you have AI [00:29:00] experience on your resume. Because first of all, resumes are being looked at by AI bots now, and there’s no secret around that. And they’re looking to see whether you have AI experience of some sort on there.

Liz: And second of all, hiring managers and companies are looking for that. It’s kind of like back in the day, you’d have Microsoft Word or PowerPoint on your resume, right? Like some of our earlier jobs, in the beginning of our careers. Totally. But you know, now, whether you’re an entry-level or a senior person,

Liz: get relevant AI experience for the role that you’re trying to go into. So say you wanna be a product manager. Like, there’s tools and stuff that PMs are using. Get familiarity with those tools and stick ’em on your resume.

Luke: That’s fantastic. That’s a great point too, and I think it’s an awesome one to end on.

Luke: Like, you gotta play with this stuff. We were even doing, like, bedtime stories for my kids, you know, where it was just amazing. I can use these things for all sorts of stuff, and you find out interesting quirks and new things just in that [00:30:00] discovery process too.

Luke: So I think great advice. Finally, like, where can people follow you in your work online or, or if they wanna reach out and, and say hello?

Liz: Cool. So we have two companies: Spring Catalyst, which is at springcatalyst.com, and the other company is Bhava Communications. That’s spelled B-H-A-V-A.

Liz: It rhymes with Java, and we’d love to hear from y’all.

Luke: Fantastic. We’ll put both of those in the show notes too. Cool. And Liz, again, I really appreciate your time. I really enjoyed getting into these topics with you, and I’d love to have you back to check in as things progress.

Liz: Awesome. Would love to do that. Thank you so much. Really enjoyed it. Awesome. Thanks. Bye.

Luke: Thanks for listening to the Brave Technologist Podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave Browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately.

Luke: Brave also shields you from the ads trackers and other creepy stuff following you across the [00:31:00] web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • The business benefits of embracing AI, including practical strategies to overcome fears and successfully adopt artificial intelligence throughout your team
  • How AI tools can complement, rather than replace, human creativity
  • The ongoing importance of fact-checking in the digital age
  • Principles of responsible AI use and the policies that support it
  • Ways to maintain authenticity with AI-generated content

Guest List

The amazing cast and crew:

  • Liz Zaborowska - Founder & CEO of Bhava Communications and Spring Catalyst

    Liz Zaborowska is the founder and CEO of Spring Catalyst, where she helps teams optimize performance and navigate AI adoption. With over 20 years in technology and deep expertise in team dynamics, she empowers organizations to transform how they create, communicate, and collaborate in the AI era. Liz has worked with AI companies for more than a decade and served in an advisory capacity for the Responsible AI Institute.

    Liz is also the founder of Bhava Communications, an award-winning marketing, PR, and social media agency that has helped hundreds of enterprise and consumer technology companies stand out as category leaders. Prior to founding her own companies, she ran technical, product, and corporate marketing on the tech startup side. She studied biology and theater at Tufts University, bringing a unique perspective that combines scientific rigor with creative ideation to help businesses and individuals thrive in our rapidly evolving technological landscape.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.