Future-Proofing AI: Ethics, Accountability and Governance
[00:00:42] Luke: Jo, welcome to The Brave Technologist. Thanks for joining us here.
[00:00:45] We’re at the AI Summit here in New York, and you’re speaking at the conference. Can you give us a preview of what you’re planning on talking about?
[00:00:51] Jo: Sure. Actually, this is my first time at the conference, and my first time speaking at it as well. Today we have a lively discussion with people I’ve worked with before, so it’s an exciting conversation for us to have. I’ll be joined by Leigh Feldman from Visa, Alaa Moussawi from the New York City Council, and Orrie Dinstein from Marsh McLennan. Our moderator is Harry Valetk from Loeb & Loeb. These are privacy professionals who’ve been dealing with privacy for years and are now delving into trust and transparency when it comes to AI. We’re going to be talking about how businesses can navigate the challenges of AI governance, and try to provide some practical insights: how to measure it, and how to talk to your C-suite about it.
[00:01:28] Luke: Awesome. No, that’s great. And you touched on trust at the end there. We’ll dive into privacy too, because it’s kind of our bread and butter here. But what strategies have you seen work well for building trust into AI systems, from your point of view, working in this space?
[00:01:42] Jo: Yeah, I grew up in privacy, and I grew up in financial services companies that really cared about trust as the cornerstone of privacy. I think trust has also been the cornerstone of successful organizations, which build loyalty through trust. So long before AI became a significant part of the conversation, trust was at the center of how you keep your customers.
[00:02:05] With the rise of AI and its increasing visibility to consumers, they know it, they see it, they hear about it, they want to partake in it, the importance of trust has only grown. And there’s a foundation of distrust as well, right? They’ve seen it go wrong. They’ve seen trust broken.
[00:02:24] AI uses and processes data in ways that feel more personal and more far-reaching, and I think that makes trust an ethical imperative in how we use individuals’ data and how we serve up the value of AI. So in today’s world, consumer trust isn’t just important. I try to drive home the message that it’s the currency of our businesses, it’s the oil of our businesses.
[00:02:50] Later I’ll explain why I think privacy is the engine that drives innovation. Especially in the age of AI, we need to think about how to build AI insights with trust as an integrated part.
[00:03:02] Luke: Let’s drill into that trust part a little more, because there are ways these systems can violate trust, right? There’s an ethical angle with AI that wasn’t really there in the same way before: you have things training on data, the data capture, how that data gets used, and the potential for automated discrimination, things that are coming to the forefront. How are you looking at trust when we drill into what this means specifically for AI, as distinct from before?
[00:03:36] Jo: So I think about it in three pillars: transparency, accountability, and value. It’s an oversimplification, but I sometimes think of things in threes. The transparency part is being able to explain it, right? And when you think about regulations, and we’ll talk a little bit about that later too,
[00:03:52] what they’re serving up in the regulatory space is trying to make sure that customers, consumers, and employees know what’s happening with their information. So that’s the transparency part. I would also elevate transparency, from a leadership and organizational perspective, to being transparent with your employees about what you expect AI to deliver, how you expect it to deliver it, and that you want trust to be at the center of that.
[00:04:18] So there are two sides to that transparency. Then, from an accountability perspective, organizations have to take responsibility for building ethics and trust into the AI, but also for building a governance model with oversight of the AI: testing the AI to make sure it’s doing what it’s supposed to do, pivoting when it doesn’t, and having an open dialogue about how to rework it and fix it so it does what it’s supposed to do and doesn’t cause harm.
[00:04:51] And then value: we shouldn’t just be using AI because it’s a cool, shiny tool, right? It has to deliver something to consumers. It has to deliver something to your employees, if you’re doing something from a productivity perspective. It has to be valuable in order to balance the risk of it, because there’s always some risk; there’s never going to be complete de-risking. And so that’s why I think of it as, like I said, transparency, accountability, and value.
[00:05:17] Luke: That makes sense. It’s good to think in threes too; it’s a natural flow of things, right? Regulation keeps coming up, and it almost seems like a cart-before-the-horse thing right now, from my point of view at least, where everybody’s trying to find market fit. What’s your take on how regulation is influencing the development or progress of AI? Do you think it’s hindering things, or do you think it’s okay? What’s your general take from a 20,000-foot view? We can go oversimplified and drill down from there.
[00:05:49] Jo: Right, right. I think regulations are trying to catch up with AI, so I don’t know that they’re in a hindering state yet. I think that’s true in the privacy space as well. What worries me about regulations is that if we don’t have the public and private sectors come together to have the conversations around AI regulation, then yes, there could be a business hindrance, or an impracticality to how the regulations come to be.
[00:06:16] So if we bring corporations to the table, having companies, even competitors, come together to talk about what responsible AI looks like, then regulations can deliver something that is practical and protective. Without that, I do think there’s a possibility of hindrance, just in the nature of it being a patchwork today: the EU considering it one way, the United Kingdom thinking of it completely differently, the APAC region thinking of it differently again. And then in the US, you couldn’t even describe a single approach to any of it, right? It’s every state for themselves.
[00:06:53] I think that makes it feel like a distraction for businesses, and in that fact itself, that distraction is a hindrance, right? It makes companies question how to do the right thing. So I think that goes back to the other answer: put trust at the center of it, and document everything that you’re doing; that’s the accountability part. If you’re well intentioned that way, then I think you can keep creating within the regulations and at least have a defensible position.
[00:07:28] Luke: So much of it feels like privacy, right? GDPR came out, and at least Europe had a definition for user data. Then in America it’s kind of whatever the companies say it is, and then the states, like you’re saying. It does seem like a lot to manage for someone trying to find market fit, while you’re also having to make sure you’re not doing the wrong thing, or trying to think ahead.
[00:07:55] Jo: Yeah, and I think you’re right. It is so much like the phases we’ve already gone through in privacy, and I think that’s why a lot of privacy pros have stepped into the AI space. Its principles are very similar, its protections are very similar, and it also feels like we can counsel the rest of our company. So we have a seat at the table. We’re not the only ones invested in AI, but we can definitely counsel on how to build the right frameworks, or how to leverage existing frameworks, to do it right.
[00:08:23] Luke: How do you feel about the current state of users interfacing with AI and their privacy? Do you feel like people should be more careful about how they’re implementing their solutions right now? Do you feel like folks are getting it right, or not thinking about it enough yet? Generally speaking, of course.
[00:08:40] Jo: Yeah, you know, I think it’s a whole range, right? Because it’s unregulated, even just in saying that everybody should have a responsible AI framework and somebody who’s overseeing it (some of the regulations popping up expect at least that), I think there are varying degrees of safety being met or not being met.
[00:09:02] Again, I think what’s important is putting it in the hands and control of individuals, so they at least know how their data is being processed. Not giving away your secret sauce, not giving away the algorithm that’s producing this, but at least explaining: you give us this information, and we will produce something that’s personalized for you.
[00:09:21] But I think it’s so mysterious right now to consumers, and actually to employees too. It’s like widgets: different parts of companies are working on different parts of the AI, and not many see the whole picture, for what purpose, for what objective. So I think AI literacy has to increase, the same as financial literacy, right? It has to be AI literacy for all within a company if you’re building it, and also for consumers as they’re consuming it.
[00:09:50] Luke: Yeah, I think that’s right, especially on the awareness side, because I’m seeing it more with enterprise use of AI, where it’s almost like they’re treating it like operational security: okay, don’t click phishing links; do we put our data into this thing or not? How is that getting repurposed? What sensitive info is in there? How does the world now know this thing I didn’t want it to know about? It’s definitely a lot to navigate in terms of inputs and outputs and users’ data, and the implementation does tend to seem a little all over the place too. In some cases, some of the most advanced tools out there are among the least transparent on the privacy side. So I’m glad you’re thinking about it this way, because more folks need to be.
[00:10:36] Jo: Yeah. It would be great if you could question every interaction with AI, right? Just like everybody who’s played with ChatGPT has probably asked: tell me how you got that answer; tell me what sources you’re using for that answer. It would be great if you could ask the chatbot in the app: what data are you collecting in order to give me that recommendation? Or, how did you come up with that?
[00:10:58] Even as just a quick answer, it also helps the individual understand: oh, that wasn’t me, that was my husband, because we’re both in the app. So for this question, filter all of that out and serve more personalization. I do think people are delighted by the idea of this hyper-personalized experience, but they want to know what goes into it, whether it’s accurate, and whether it represents them.
[00:11:28] Luke: And if you already have an existing service, how much of what you already collect is just being repurposed? How much of it can cross over and be part of it, or how much can AI integrate into the existing service, to where, okay, I don’t have to collect a ton of new data; you’re getting enough with what we already have?
[00:11:43] Jo: Right. Well, I think that’s what’s great about Brave as well, right? Thinking about the kind of data that absolutely needs to be processed about the individual, versus: do I know enough about people like this? Do I know enough, with not-real data, with synthetic data, that I can still produce a hyper-personalized experience?
[00:12:01] I think not enough companies question that, and the tech people who are building AI definitely lean toward wanting real data, because real data is right there at their fingertips. But if you challenge them, can we make this real data synthetic? How can we de-identify it or anonymize it, and can you still get the right outcome? If you give them the right questions, the right challenge problems, your best technologists can come up with the answers for how to do it and build it in a more trustworthy and privacy-friendly way, for sure.
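To make that challenge concrete, here is a minimal sketch in Python of the two ideas Jo mentions: de-identifying a direct identifier, and generating synthetic rows that preserve the shape of the real data. All names, fields, and records here are hypothetical, and real synthetic-data tools also model correlations between columns rather than just per-column distributions.

```python
import hashlib
import random

# Hypothetical "real" records; in practice these would come from production.
real_users = [
    {"user_id": "alice@example.com", "age": 34, "favorite_genre": "sci-fi"},
    {"user_id": "bob@example.com", "age": 41, "favorite_genre": "romcom"},
    {"user_id": "carol@example.com", "age": 29, "favorite_genre": "sci-fi"},
]

def pseudonymize(value, salt="rotate-this-salt"):
    """De-identify a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def synthesize(records, n):
    """Sample synthetic rows preserving each column's marginal distribution.
    (Naive on purpose: real tools also model cross-column correlations.)"""
    ages = [r["age"] for r in records]
    genres = [r["favorite_genre"] for r in records]
    return [
        {
            "user_id": f"synthetic-{i}",
            "age": random.choice(ages) + random.randint(-2, 2),  # jitter values
            "favorite_genre": random.choice(genres),
        }
        for i in range(n)
    ]

# A de-identified view for analytics, synthetic rows for model prototyping.
deidentified = [{**r, "user_id": pseudonymize(r["user_id"])} for r in real_users]
training_rows = synthesize(real_users, n=100)
print(deidentified[0])
print(training_rows[0])
```

The point of the sketch is the workflow rather than the specific techniques: prototype against synthetic rows, keep a de-identified view for analytics, and only reach for raw identifiers when the use case genuinely requires them.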
[00:12:37] Luke: Yeah, it seems like it comes down to: how much are you trying to do the right thing with this, versus how much are you trying to use the technology in a way that might benefit a certain bottom line, but not necessarily be in the ethos of helping the user?
[00:12:51] Jo: Yeah, I think that’s where the key is for your organization and your leadership team too: really making the connection. I’ve explained that I have these strategic objectives, and I’m really excited about all the places we can implement AI, whether to make your job easier, to save money, to delight customers, or to drive revenue. But make the connection to your core values as well: we want to do it with these tenets in mind. Then I think the builders understand and get it right. Without making the connection between the strategic objectives and the core values, I think sometimes there’s a disconnect, like a race to market that misses the boat on that value part.
[00:13:35] Luke: Totally. And this is kind of a personal question: when you think about the different ways people are integrating AI into services, Booking and OpenTable and all the rest, what use cases do you see right now that are super interesting to you, just as a person using AI?
[00:13:52] Jo: I feel like I have to go in and test things, because I need to have the experience to know, or to question, what’s the magic behind it. You know, like Oz, the movie that’s out right now, right? Is there an Oz, or is it a man from Kansas behind the drapes? In terms of exactly which ones I’ve been delighted by, or which ones I’ve experienced:
[00:14:16] I was in San Francisco a couple of weeks ago, and I rode in a self-driving car. I was very curious about what it was collecting on me. You have to download the app, then add your payment card information, and then you get to personalize, in real time, what the experience is going to be like in the car. And you can keep personalizing the experience while you’re in the car, too. All of that is driven, like literally driven, by AI. I have to say I was a little scared, because it was going very fast, faster than I anticipated. I would have liked that to be an option: I don’t want it to go the speed limit, I want it to go five miles under the speed limit, because there isn’t a driver.
[00:14:57] Luke: Yeah, yeah, I know.
[00:14:58] Jo: It is a vulnerable feeling. It was a very vulnerable feeling, but I had to do it, and I would probably do it again to continue to test it, to see what the parameters are within which I, as a user or as a consumer, can really personalize it and understand it, right?
[00:15:15] Luke: Well, is there a wish-list case where you personally would say, oh, I wish I could have this thing now that AI could help me with?
[00:15:21] Jo: If you’d asked me when I was a mom with very little kids, I would have loved the idea of a robot in my house cleaning while I was holding them and having fun with them. But now that there are robots, I’m not sure that I’m ready for that. Maybe to do my grocery shopping, or something like that. I use AI more and more every day in ways I’m not even thinking about, right? From the first time I got an Alexa in my house, thinking it was just going to stream music, to now asking it to remind me of things. When I was cooking Thanksgiving dinner, I set every single timer with it, and I was also talking to it during the day: remind me in 10 minutes to do X, Y, or Z.
[00:16:07] So I felt like I had a little sous chef in the kitchen with my Alexa in there. And we use it from a security perspective all around our home, with different cameras and systems on the outside, which gives us a different kind of safety and security when we leave our teenage daughters home or when we go on vacation. That’s something I wouldn’t have pictured having as a kid, right? We had an old-school alarm that you put a key into to turn on, and put a key into to turn off when you got home.
[00:16:42] Luke: Are there key metrics that enterprises should be looking at when they’re evaluating AI performance, or privacy, or any of these things we’re talking about here?
[00:16:53] Jo: Yeah, I think companies have thought really quickly about the metrics that prove return on investment, that prove they’re driving revenue, that prove customers are engaging with it. But I think there’s a broader spectrum of metrics that companies need to be thinking about, especially coming from me, right: privacy and ethics.
[00:17:13] Measuring how well AI models perform isn’t just about those numbers; it’s also about ethics, safety, and value to the business. I think of those metrics as your health check for accountability. There are a number of metrics to make sure that you’re being as accountable as you can be, and that your organization has the right roles and responsibilities within it to care about those metrics, right?
[00:17:32] For ethical compliance, you want to think about: is the AI fair? Is it unbiased? For example, are there patterns that show it favors one group over another? Sometimes that wasn’t known when those tools first came out; either it favored somebody or it completely excluded another group. You have to track this with things like bias detection rates, or metrics that check for unintended impacts. And as soon as you recognize an unintended impact, you have to have your tech teams ready to fix it, to shut it down for some time period and come up with a solution.
[00:18:17] You don’t want to prolong those unintended impacts, for sure. And I think you have to be transparent with customers too, and accountable to customers. This is all so new and happening so fast. Being transparent and accountable, like I say with my daughters, is about taking responsibility. Saying, we didn’t anticipate this, but we identified it, or you identified it for us, and we’re going to do something about it, is more important than never making a mistake. It’s the way you respond. It’s not that you made a mistake; it’s how you react and respond in light of that mistake. I think that’s important.
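As an aside, here is a minimal sketch of what one of the bias detection metrics Jo describes might look like in code. Everything here is hypothetical: the group labels, the audit sample, and the tolerance are illustrative, and real fairness audits use richer metrics (equalized odds, calibration) over much larger samples. The check computes a demographic parity gap, the difference in favorable-outcome rates between groups, and flags the model for review when it exceeds a tolerance.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the gap between the highest and lowest favorable-outcome
    rate across groups, plus the per-group rates.

    predictions: 0/1 model decisions (1 = favorable outcome)
    groups: group label for each decision, aligned with predictions
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += pred
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: recent decisions from a deployed model.
preds = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A"] * 6 + ["B"] * 6

gap, rates = demographic_parity_gap(preds, groups)
TOLERANCE = 0.2  # illustrative threshold a governance body might set
print(f"favorable rates by group: {rates}, gap: {gap:.2f}")
if gap > TOLERANCE:
    print("Flag for review: possible disparate impact; escalate per policy.")
```

Wired into a scheduled job, a check like this gives the kind of governance body Jo mentions a concrete trigger: when the gap crosses the threshold, the tech team investigates, and if the impact is confirmed, the model is paused or reworked rather than left running.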
[00:18:53] Luke: Do you feel like users have enough feedback mechanisms in AI systems today?
[00:18:58] Jo: No, I don’t think so. And I think that’s that magic behind the scenes. It’s not easy to call up: how did you give me this personalization? What made you do that? When you go to Netflix, it’s kind of obvious, right? I’ve watched these ten movies, and that’s why it’s now serving me rom-coms. But it doesn’t know that it was my grandma, who was sleeping over for the weekend, who picked them. Which is why we all have our own account in my home now.
[00:19:30] But it’s obvious to you there, right? So maybe they don’t have to be as transparent, because every click gives Netflix information about how and what to serve you. With not every app, not every website, is it so obvious what’s going into the decisions about how to serve you information. I think that’s part of the transparency responsibility that companies have: how do you explain how you got there, in a way that doesn’t give away your secret sauce, but has customers understanding and actually saying, I’m not interested in that anymore? That was a point in time where I was interested, but serve me something different, or let me actually self-select what I’d be interested in.
[00:20:11] Luke: Yeah, that makes a lot of sense. One thing we try to do on the podcast is dispel a lot of the hysteria, because people get dystopian with this stuff pretty quickly. But as somebody working in this space, especially on the privacy and ethics side, are there areas where you’re concerned about the current state of affairs with AI? Not necessarily things that keep you up at night, but things where you’re like, gosh, we really need to get better about this?
[00:20:39] Jo: Yeah, I think that’s where you were opening with regulations. I’m worried about how fast the pace of AI is. It was only about two or three years ago that we first heard about OpenAI. We knew about AI earlier, but having it at the fingertips of an ordinary consumer is very new. And if you think about how advanced it’s become because of all the data that’s going into generative AI, I’m worried that we will lose control too quickly and end up in a space that we never expected or anticipated.
[00:21:15] I’m not saying regulation can be the solution for that, but I do want the private sector, with all of its geniuses, to come together with the public sector, which wants to protect individuals and consumers, to try to come up with solutions. Not to derail it, not to stop it, but to do it as responsibly as possible.
[00:21:35] I don’t think that’s clear enough today, and that’s what scares me. I think there’s enough out there that doesn’t have that expectation and isn’t doing it for good. And if you put it in the hands of nation-states who aren’t out to protect anybody but themselves, and do plan on harming others, they’re using the same generative AI tools to harm individuals, right? Cybersecurity is having a hard time keeping pace with the AI tools that can hack into your systems. So what worries me is: how can we come together today? The time is now. The time was this past year. The time has got to be as soon as possible.
[00:22:22] We’ve got geniuses at this summit, right? And we don’t have that same technical acumen with regulators. So it’s not just about being scared; it’s about using it in a way that we can feel comfortable with for our future. Otherwise, the scary movies we’ve all watched over the past 10, 15, 20 years can become a reality if we don’t actually take charge.
[00:22:47] Luke: Yeah. Well, it’s also interesting with AI compared to privacy in advertising, where programmatic advertising had proliferated to the point that the industry had way more influence, because it was already in practice, already adopted, by the time regulation came along. Now people are still trying to find market fit with this stuff, so it seems like there’s a better opportunity for regulation and ethics to get a foothold than there was with advertising, where it became, okay, if you want a free Internet, you basically have to give up your privacy, a kind of trade-off. At that point it had scaled so quickly that it was just there. So I totally hear you on the concern. The sooner everybody can rally around some practical stuff, the better.
[00:23:38] I tend to live in the dystopian space with this stuff, but we’ve had on academics, we’ve had on regulators, we’ve had on business leaders, and the amount of concern that academics and business leaders have around this has been humbling, because you don’t normally see it. Until they actually talk about it, and you’re like, okay, cool, you guys care about this a lot more than people realize you do, right?
[00:24:06] Jo: I would agree. I hadn’t even thought about that angle. You see so many business leaders actually going to Congress, going to DC, and not to say, don’t do this, which is typically what you see from a lobbying perspective when there’s something a company doesn’t want regulated. They’re saying: we want it regulated, we want it regulated in a pro-business, pro-innovation way, but we want to do this responsibly, so let’s build this together. There are so many business leaders trying to be at the forefront of influencing this.
[00:24:38] Luke: And I think, too, there’s been such a huge explosion of open-source development in AI, and that’s impacting things in a positive way, because it helps with that transparency part. With commercial applications of open-source technology, there’s already familiarity, and it already has adoption. If we can add transparency by integrating open-source components into commercial services, then we’re helping lift the bar in a way that’s already got some legs with traditional web services. That’s my point of view, anyway. Is there anything we didn’t cover that you want our listeners to know about, with what you’re doing?
[00:25:16] Jo: If you’re a technologist in a company building AI, or working on some part of the AI that’s being built, I would say: if your company isn’t doing this already, suggest cross-collaborative governance bodies. Call it a governance body, or if that’s not cool, name it something cool. But come together with your privacy, compliance, legal, and cybersecurity people, and decide how to build all of this in from the start. I think where privacy and legal get a bad rap is that we often look like the brake in the car, or the roadblock on the freeway, and I think of us as part of the engine, right?
[00:25:58] We’re going to help you get to that destination, and we can help you get there fast, especially if we’re built into the engine from the start, not as an add-on after the fact. So if you don’t know what your company’s values are here, if you don’t know what your company’s expectations are when it comes to trust and having privacy or ethical obligations built in, I would say start the conversation, and start small,
[00:26:28] Luke: Yeah.
[00:26:28] Jo: but bring all those people into a room so that you can get it right at the start.
[00:26:32] Luke: And in that same vein, we have a lot of listeners in tech who want to do more on the ethical front and on the privacy side, and we don’t often have guests who specialize in privacy. What would you recommend for people who might want to get into this area? It’s not necessarily something they teach in school, right? Any pointers, from somebody working on the privacy side, for people who might want to get into it?
[00:26:58] Jo: There’s something very instinctual about privacy, and I think it can be taught to anybody. I shouldn’t say that, because I’ve made a career out of it, but I think it can be taught to anybody. Most technologists have actually understood enough about what’s important to privacy. But changing it to be not just about privacy, but about ethics and trust, broadens the conversation.
[00:27:21] So it’s not just about meeting a regulation with privacy in the title. Think about it from the consumer end: what would you want or expect to be happening behind the scenes with the thing you’re engaging with? Do you feel like it was clear enough to you? Do you feel like you understood it? It’s putting yourself in the consumer’s position and, if you’re working at a company, making sure you could feel comfortable that your company is upholding everything it says it does.
[00:27:53] I think that’s the ethical part. You say you do X: do you really do X? Do you really do Y, and A, B, and C? And why not tell customers all of that? But then also, when something goes wrong, like I said, raise the alarms. Call those friends. That’s why it’s important to know who’s in legal and compliance and privacy, because things will go wrong.
[00:28:18] You will not always get it right. There’s so much data coming in that you weren’t anticipating or expecting. It’s really about how you respond when those things go wrong, how you bring them up, and how you identify ways to improve.
[00:28:33] Luke: Yeah, that’s awesome. Software is made by imperfect people, right? You’ve got to own it. Where can people follow along with your work? Are you out there on LinkedIn?
[00:28:42] Jo: I’m on LinkedIn. I couldn’t call myself a social media influencer in any way, shape, or form, but I am on LinkedIn. I’m at Booking Holdings as well, so you’ll sometimes see posts from Booking Holdings and from any of our brands.
[00:28:57] Luke: Awesome. Well, Jo, I really appreciate you coming on the show today. I think people learned a lot from your point of view on these topics. Thanks.
[00:29:07] Thanks for listening to the Brave Technologist podcast.
[00:29:11] To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.