Back to episodes

Episode 36

AI Revolution: Transforming Business, Jobs, and Regulation

David Westera, AI & Data Leader at Adidas, discusses the pressure that many companies feel to adopt emerging technologies, and the biggest challenges companies face when trying to implement AI solutions. He also explores the contrasting regulatory landscapes of the EU and the US, highlighting the challenges and ethical considerations that come with AI advancements.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:29] You’re listening to a new episode of the Brave Technologist, and this one features David Westera, who’s a data and AI leader driving innovation at Adidas. He specializes in helping companies achieve transformative growth, competitive advantage, and maximum profitability through AI and machine learning powered product deployment and integration.

[00:00:46] He spearheaded transformative AI and machine learning initiatives across various industries, including fintech, sports apparel, and banking. In this episode, we discussed the pressure that many companies feel to adopt emerging technologies, the biggest challenges they [00:01:00] face when trying to implement AI solutions and how to overcome them,

[00:01:03] how we envision the role of human workers evolving as AI becomes more prevalent, and the future of AI agents and the impact they can have on our lives. And now for this week’s episode of The Brave Technologist.

Luke: David, welcome to The Brave Technologist. How are you doing?

David: I am very well, I’m looking forward to this interview.

[00:01:26] Luke: Why don’t we, just for the audience here, set the table a bit. Give us a sense of how you ended up doing what you’re doing. Was there anything in your background that helped get you to where you are?

[00:01:37] David: Yeah. So maybe it’s best to start off probably when I was at university.

[00:01:42] So I was studying international business and finance, and I actually wanted to go into finance, maybe, let’s say, for a big bank. I won’t name certain names, but I think after doing a couple of accountancy courses, I realized I probably didn’t want to spend the whole of my time [00:02:00] going through Excels, budgeting and closing accounts every month and every quarter.

[00:02:04] So I did a lot of research. I remember, in my second and third year especially, really going through a lot of sites and websites trying to understand the different types of roles out there, and there was one saying that kept coming up, and you probably remember this yourself: data is the new oil. I haven’t heard it for a while, but ten years ago everyone was saying data is the new oil, data is the new oil, and I thought what I wanted to do was enter an industry where there would be a lot of growth and a lot of opportunities for myself. So I started looking at internships, and I managed to get an internship in the Netherlands for General Electric, and I joined as an analyst. And, funny enough,

[00:02:45] I ended up doing a lot of Excel work. And, knowing my nature, I started thinking: how can I make my job easier? How can I save time building these Excels, and do other things or leave the office early? So I [00:03:00] started building macros, and that was really my first insight into

[00:03:05] not just using data to, let’s say, make decisions around the business, but especially automation. And that role grew. After I graduated from university, I got brought back to lead a project with my senior vice president to do a big business intelligence transformation project.

[00:03:24] The two main goals there were to automate the annual reports, which went out every year and took months to build, and also to automate the monthly closures. And that gave me a real insight, first, into building a tech product. I love the idea of having, let’s say, a finished article at the end, sitting back and seeing this amazing tool that we’ve built and that the business is using, but also saving people’s time, to be quite honest.

[00:03:54] I think that was a big thing, making people more efficient. And I’m a bit of a people [00:04:00] pleaser, I think, so everyone being very happy that account closures were happening quicker and they were finishing their jobs earlier, I got quite a kick from that, and it accelerated from there really.

[00:04:11] Luke: Yeah, there is something about people using the thing, right? And getting some enjoyment or fulfillment out of that, or, like you said, making it easier for people. It’s a bit of a time warp in my mind too. I remember building macros with the ad tech stuff back in that business intelligence craze, and there was a surprising amount of manual work happening on spreadsheets back then, even at big companies. You’re like, okay, we’re all in the same boat here. But no, that’s fantastic.

[00:04:40] That’s super interesting. When you look back on how you started there, how much time do you think you’ve cut out with what you’re able to do with generative AI now? Looking at things like the macros you were building and that table work, if you had to put it in a percentage, how much time do you think you’re saving now with some of these [00:05:00] tools compared to back then, when you were building out all those reports?

[00:05:04] David: I would probably say 70 to 80 percent of that job is now automated by Copilot and generative AI.

[00:05:10] I’m not sure if that role still exists, and probably not, I hope not, in the form that it was. Yeah, 70 to 80 percent for sure. And I think that final 20 to 30 percent is the storytelling and presenting to senior leaders of what your insights are and what the data is telling you. Everything from building and optimizing the Excels to building those presentations, Copilot almost has that covered now, which is quite impressive.

[00:05:37] Luke: With everything with AI, there’s a lot of hype, a lot of people saying the buzzwords, right, like ubiquitous, and it’s going to be everywhere, this is all, you know, state of the art, whatever. I mean, like, fundamentally though, you have a lot of people in organizations who see and can touch it and get excited about it.

What are the fundamental pillars organizations need to start embracing for successful AI [00:06:00] development and implementation, from a real-world perspective, from your point of view?

[00:06:04] David: Yeah, the first thing that comes to mind is strategy. I think it’s vitally important that an organization has data at the core of its strategy, and not, let’s say, only saying the words because they feel like they have to, but that the organization is bought in on it. I think that’s vitally important. Then you go into data governance and building a robust data framework. I was speaking to quite a lot of people in the UK recently who work for, let’s say, small to medium-sized enterprises, and one thing that took me aback, I think I’ve had my head in the sand a little bit for the last five years, building machine learning models,

[00:06:41] and I was going through life with the assumption that everyone was doing the same thing. What I’ve realized is that data governance is still such a huge challenge for most organizations, not just collecting and storing the data, but understanding and knowing how to use it to optimize the business and optimize that decision making.[00:07:00]

[00:07:00] And the final piece, I’ll say, you need to invest in tech talent. I think, you know, these solutions don’t get built without having talented people who are building the code and, you know, building these frameworks. And I, I think those three aspects are key to any organization, really being able to even start thinking about adopting AI or building AI solutions.

[00:07:21] Luke: Yeah, I’m really glad you touched on the data governance part too, because it’s something we deal with on the privacy side, right, where AI is so built on data and modeling and all of that, but we’re still in this era where you have regulation in the EU and in a whole bunch of other countries, but there’s no uniform standard, right?

[00:07:41] Is that one of the major challenges that companies face, do you think, or are there other challenges that stick out in your mind when you see companies trying to implement AI solutions? From your point of view, like are there ways that people can kind of frame it to try and overcome these challenges?

[00:07:55] David: I think the first thing is change management, and maybe before deep diving [00:08:00] into change management, I think there’s a lot of hype and buzz around AI at the moment, and I think there’s a lot of almost peer pressure for leaders and organizations to show their workforce and also show their customers how they’re adopting AI, and I’m seeing examples where I think people are rushing towards technology without really understanding the underlying business case.

So I think really identifying where in the organization you can actually excel by using machine learning and AI, that’s vitally important. And really making sure you have a strong business case and it’s bought in with those stakeholders, I think, is the real first strong step. And then it’s that change management side.

[00:08:40] You can’t build things in silos. Again, I think we’ve all seen products and solutions being built and not being used once they’re finally finished. And a lot of that is by not involving those end users from the start and not involving them in the design process, the testing process. It’s built by data engineers and data scientists, [00:09:00] but it’s a collaborative organizational effort to really design the solution and really to get it over the line into production.

[00:09:06] So those are the two aspects where I see some of the biggest challenges.

[00:09:10] Luke: Are there any risks that you see companies just blindly jumping into that freak you out a little bit, or are concerning? Given how new all this is, especially because you touched on it really well around the peer pressure,

[00:09:22] I feel like there’s also board pressure too, right? Where all these folks are saying, oh yeah, you know, superpower your engineers and all this stuff. And it’s not quite that easy, is it? Are there risks? Are you concerned at all about how that pressure affects other risk areas for businesses, from what you’re seeing?

[00:09:41] David: One story comes to mind, and I won’t go into specifics of names. I was speaking to a couple of companies selling certain AI products over the last couple of months. They were giving me their proposition and telling me how their solution works, and the biggest keyword they kept saying was Gen AI: you know, we’re [00:10:00] using Gen AI and AI technology to transform your company.

[00:10:03] And this solution will, you know, solve all the problems in the world. But as I said, with a couple of simple questions, I realized they weren’t actually using any generative AI technology whatsoever. At most, they were using a very rudimentary machine learning model, and it was actually more of a workflow-based solution.

[00:10:21] And you can call it AI because it’s automation, but they were really jumping on that Gen AI buzzword. I think there’s a risk there. I think, you know, I’m seeing a lot of companies where they’re selling quite simple business cases, but they’re marketing it around that AI buzzword. And I think if you don’t have a full understanding, maybe of technology, or if you do feel that peer pressure, you feel like you have a bit of budget and you want to invest in this area, it’s a very saturated market at this point.

And I think you need to be very careful about who you partner with and what solutions you buy, because you might be buying a very expensive solution which solves quite a simple problem in the end. [00:11:00] And generative AI is amazing, it’s a great technology, but you don’t need generative AI to solve all AI business cases, let’s say.

[00:11:08] And yeah, you need a little bit of know-how to navigate the market.

[00:11:12] Luke: Yeah. I think it probably touches back on your tech talent point, right? Investing in folks who actually know what they’re talking about, and then making sure those people talk to marketing. How do you envision the role of people evolving as AI becomes more prevalent in business operations?

[00:11:28] How does this all impact the average worker at a tech company?

[00:11:32] David: While you were asking that question, I had this thought that, you know, when you go back to films like Minority Report, AI is a lot of the time linked to this dystopian future where there’s very much, let’s say, little human interaction and that human nature kind of goes away.

[00:11:48] Another thought that comes to mind is that even the NVIDIA CEO recently said that software engineering roles won’t be needed in the future, as this can be done by generative AI. I’m not too sure if [00:12:00] anyone is correct there, but one thing I’m certain of is that roles will change, our lives will change.

[00:12:05] Like, I don’t believe in this idea that all jobs are going to disappear because of automation and AI, but I do think jobs will change and skill set requirements will adjust. It is up to ourselves and society to make sure we still have the right set of skills and stay employable with this, let’s say, new wave of technology coming along.

[00:12:26] Take a software engineer: in the future, quite possibly, they won’t be building large scripts of code as they’ve done previously. But we still need testers, we still need QA, we still need Ops, and that still needs that underlying technology and coding skill set. And I think it’s an example of:

[00:12:45] yes, roles might change, but I don’t believe roles are disappearing, and I don’t believe software engineering as a chapter is fully disappearing. I just think responsibilities will change slightly.

[00:12:58] Luke: I totally agree with you too. I think we’re [00:13:00] even kind of seeing it now, where there’s been such a proliferation of people script-kiddying with this stuff that it’s almost like humans can sense when something’s written or generated by AI now, just like how there used to be spam replies and things like that, where you’re like, okay, this isn’t real.

[00:13:15] I mean, it’s more clever, but there’s going to be that human element, right? You have to have the human touching a thing for people to connect with it in a real way, I think, or for it to feel authentic. Speaking of this, there’s a lot going on in AI right now.

[00:13:29] Which developing area in AI is really interesting to you, or one that you’re really sinking your teeth into, even in your free time?

[00:13:37] David: I would say the short term next big development is just accessibility.

[00:13:41] You know, these models will continue improving, but just having this technology more accessible within our daily lives, that will be the first step. But that’s not, I would say, a major development. What is happening now is that we’re having AI agents: a combination of various different types of machine learning models and generative models.

[00:13:59] And I think the [00:14:00] next big development I’m looking for is AI agents that act as a personal assistant. I definitely need a personal assistant in my life, and I think that can be anything from managing your calendar to organizing reservations, either for work or for your personal life.

[00:14:17] I’ve even heard, I think it was Bumble maybe, that they’re now having AI agents that do the dating scene for you. So gone are the days where you’re texting random strangers to see if you have a connection or not; you can have an AI agent act on your behalf while you do something you’re better suited for.

[00:14:35] And I think it’s freeing up your time to really spend where you want to be. Having a technology take away some of those more monotonous tasks that I don’t want to be spending a lot of time on, but still feel like I need to do: yeah, an AI agent,

[00:14:51] that’s what I’m looking forward to.

[00:14:53] Luke: It’s awesome. I think it’s really fantastic. And yeah, I also feel very old and boomerish in that I [00:15:00] managed to somehow dodge this whole dating scene and element with these apps and things. Do you think, I mean, there’s a lot of things happening here that are familiar with trends, right?

But also kind of a little bit detached. One thing that really jumps out to me is just how much of a boom there’s been around the open source side of AI within the past two years, especially where you had OpenAI’s GPT, but then all of a sudden you had Hugging Face and Llama and all these folks, even from big companies, that are starting to open source things.

[00:15:28] How do you think that’s impacting what people would have predicted around these trends, or maybe that was baked in already, from your point of view? Do you like that there’s so much open source activity around the AI space now? Do you think it’s going to make things rapidly accelerate? What’s your sense of how that element is changing things compared to other cycles or trends?

[00:15:48] David: I think it’s increasing the pace of innovation, and also competition. Honestly, if this technology boom had happened 10 or 15 years ago, I don’t think it would have been as open source. I think there would have been a lot [00:16:00] more ring-fencing of the technology, and you’d have license fees, and maybe only big companies would be able to afford it.

[00:16:05] Because these technologies are open source, you’re seeing this arms race happen now, really, over who has the best model, who has the most energy efficient model, who has the cheapest model. And I think that is partly down to this technology being open source, and a lot of different people and a lot of tech talent really getting their hands on it, understanding it, and starting to think, okay, how can I improve this further? Again,

[00:16:28] that pace of innovation, I think, is a lot faster when technology is open source, because you’ve got more people looking at it and more people trying to improve it.

[00:16:37] Luke: Yeah, especially with a lot of concerns around it, the more transparent folks are around the tech, the better. And you’ve got hybrid situations too, where pieces of it are open source and pieces aren’t.

[00:16:46] It’s all helping with what you touched on earlier around accessibility. I’ve had a lot of conversations with a lot of people in AI, and it is a recurring theme: we have to have access to this [00:17:00] stuff for everybody really quickly, or you’re going to see people fall behind.

[00:17:02] Speaking of access and falling behind, what’s your take on how regulators are approaching this space? Is it something you’re concerned about, or do you think they know enough to even really fully grasp it yet? Do you see it getting in the way, I guess, or are you seeing it inhibit development at all, from your point of view?

[00:17:20] David: I think it has the possibility to inhibit development. There are two different narratives here: an EU regulatory narrative and a US narrative around regulation. I don’t see the US putting any AI regulation in place soon. Actually, I think Congress just assigned a budget to start a think tank around AI regulations.

[00:17:39] So they’re still very far behind in creating anything there. And I think it’s also important to remind ourselves that we do have regulation in place. We have GDPR, for example, and a big part of that is how we use the underlying training data and ensuring that data is collected, stored, and used in an appropriate way, which affects AI models.

[00:18:00] I think the EU are probably going to be quicker to the plate, if I’m going to use a US term. I believe the EU are more likely to regulate; I’m not sure when. I do think that has the possibility to slow down development, and I’m probably slightly concerned that the powers that be making that decision are not the best suited, or don’t have that full understanding. That’s probably a concern of mine.

[00:18:22] However, what’s more important to me, I think, is when we go into ethics and building ethical AI solutions. I think that’s a lot more important, and I believe we can live in a world where we don’t need laws for everything and regulations for everything. And maybe that’s rose-tinted glasses, slightly.

[00:18:42] I like to believe that society as a whole can be trusted to ensure, in this example, that we can build solutions in an appropriate way that protects society and protects our way of life. For example, with biased models, using inclusive and diverse data [00:19:00] is vitally important, and having bias detection techniques in place to make sure that your model is predicting and giving answers that really are correct.

[00:19:09] And you have drift monitoring in place, for example, to make sure that once you’ve built that model, the model quality and accuracy don’t change over time. I think that’s vitally important. Going back to having cross-functional input, I think it’s vitally important that different corners of the world, and we go back to that open source subject a little bit, have lots of different viewpoints and types of people and frames of mind influencing this technology. I think that will improve it, and that will go towards having less biased models, to be quite honest with you.
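The drift monitoring David describes can be made concrete. One common technique is the Population Stability Index (PSI), which compares the distribution of a feature in live data against the distribution the model was trained on. This is a minimal, self-contained sketch of that idea, not a description of any tooling actually used at Adidas; the thresholds and variable names are illustrative.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Both inputs are flat lists of numeric feature values. A common rule of
    thumb: PSI < 0.1 means little drift, 0.1-0.25 moderate drift, and above
    0.25 significant drift (exact thresholds vary by team).
    """
    baseline = sorted(expected)
    # Quantile-based bucket edges taken from the baseline distribution,
    # so each baseline bucket holds roughly the same share of the data.
    edges = [baseline[len(baseline) * i // bins] for i in range(1, bins)]

    def shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(1 for e in edges if v >= e)] += 1
        # Floor each share at a small epsilon so the log term stays defined.
        return [max(c / len(values), 1e-6) for c in counts]

    return sum((a - e) * math.log(a / e)
               for e, a in zip(shares(expected), shares(actual)))

# Simulated monitoring run: the "shifted" live sample has drifted upward.
random.seed(7)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
live_stable = [random.gauss(0.0, 1.0) for _ in range(5000)]
live_shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]

print(f"stable PSI:  {psi(train, live_stable):.3f}")   # small value: no alert
print(f"shifted PSI: {psi(train, live_shifted):.3f}")  # large value: raise an alert
```

In production this kind of check would typically run per feature on a schedule and page someone when a threshold is crossed; open source monitoring libraries such as Evidently offer richer versions of the same idea.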

[00:19:42] Yeah, I’m not the biggest fan of regulation, I’m not gonna lie. I do think it’s coming. I don’t think it’s coming soon, but I think it will really come around if companies start taking advantage of AI, and I don’t think we’re seeing that yet. I think we’re [00:20:00] seeing it used as an approach to enhance people’s lives.

[00:20:02] We’re trying to make people’s lives easier, make them more efficient, and I think that’s a nice way to go. I’m pretty sure at some point this technology will evolve and someone will figure out a way to somehow cheat the system, in whatever way they figure out. And that might be the first, let’s say, bad case, where someone then realizes, okay, we need regulation around this.

[00:20:21] I like to think that won’t happen, but it probably will at some point. And I do think when the models get more advanced, and I don’t know why, but politics comes to mind here. I’m not really sure why AI and politics are coming to my mind, but I believe if we have models that are influencing political environments, or even our climate,

[00:20:45] or financial institutions, you know, we have a lot of high-frequency trading models which we class as AI. If they start having a negative impact on society, that’s where I think things can go wrong. But I don’t think we’re close to that stage at this point, to be quite honest. Going back to [00:21:00] Minority Report a little bit.

[00:21:03] Luke: Yeah, no, I totally hear you. I mean, it is a really interesting landscape right now, and I feel very similarly to you: there are so many laws already, and there’s already so much gray area. We’re building global software for global users, and there’s such a land grab at times,

[00:21:20] it seems, with different national regulators trying to influence a global landscape, and there are enough laws. But I’m not going to get too political or anything; it is kind of unavoidable this year. One thing that really jumps out to me is that there seems to be this perception or assumption that the bigger companies are going to do this better.

[00:21:43] And what’s been interesting to me, and I’d love to get your take on it too, is how that’s not necessarily the case, you know. You see things like Gemini having some quality issues with some of the AI answers they were serving with search, and some of these other companies too, where you’re like, look,

[00:21:59] everybody’s [00:22:00] still learning this, right? Is that kind of what you’re seeing, or is being too big in tech kind of like an anchor around the neck for developing quality AI products for people to use at this point? From your point of view, I’d love to get your take on it.

[00:22:12] David: I think we’re at the very start of the maturity cycle, let’s say.

[00:22:16] And I think this is what plays into big tech’s hands at the moment, because a vast amount needs to be invested in creating these models, and that’s where big tech has the advantage. But I do think that advantage will slow over time, to be quite honest with you. I think the biggest players coming out of this are going to be the people who really identify the key uses of this technology in the end, and how it can really impact the everyday person’s life.

[00:22:42] And that might not be a Facebook or a GPT; that might be a smaller startup that uses this technology but finds a really cool way to use it. You know, a story comes to mind, actually. I can’t remember the man’s name, but the creator of the refrigerator actually didn’t make a hell of a lot of money from building [00:23:00] the refrigerator, which is in everyone’s lives.

[00:23:02] But you know who did? Coca-Cola. I believe Coca-Cola was the single biggest organization that made money off this technology, the refrigerator, at the time, because they learned how they could use it, and they were providing a service and a product to the world. And I think that can be an example: you can build the most amazing technology, which can impact the world for the future, but you might not be the biggest gainer from it.

[00:23:27] Actually, the biggest gainers are the ones who impact people’s lives.

[00:23:31] Luke: That’s a great example. I love that; I’m going to steal that, by the way. It’s fantastic. With software companies, there’s so much history that people are just totally oblivious to. You find these stories in different pockets of the world; it’s really interesting.

[00:23:45] Is there anything we didn’t cover today that you’d really love our audience to know about that’s top of mind, any areas that you’re working on that you’d love people to get more familiar with?

[00:23:55] David: One thing I will say is I’m hugely excited about the possibility of how [00:24:00] AI can really impact healthcare and climate change.

[00:24:02] This is not my immediate scope, but going back to how it can help society as a whole, I do believe these models and this technology will at some point get to a point where they’ll be able to start solving some of the world’s biggest problems. And those are two key areas where, at the moment, especially around climate change, it seems like all hope is lost a little bit, like we’re not really going to be able to make any impact.

[00:24:31] I don’t believe that’s going to be, let’s say, the story till the end of time. I do believe technology will be able to find a way where humans couldn’t, to be quite honest with you. And with healthcare: you live in the U.S., so if you have private healthcare, I think you’re quite privileged with the level of healthcare you get.

[00:24:49] But that’s not the case everywhere, and even in the U.K., for example, people are struggling to get doctors, dentists, even GPs. We can [00:25:00] see how even generative models can help there, all the way into surgery, where maybe you don’t need more invasive surgery in the future because you can use robotics and keyhole surgery, which is already kind of in place, to minimize how much time someone spends recovering from a certain procedure.

[00:25:18] So from a human aspect, those are the two areas where I’m really looking forward to seeing continued advancements, and hopefully they’ll impact the world for the better.

[00:25:28] Luke: I absolutely love that last bit too, especially about the medical side. This is something I feel like not enough people talk about either, right?

[00:25:35] We have this ability now where you could have models training on all the different types of cancer, for example, or all these different things. Where you’re currently relying on small cohorts of people or big studies, things are bound to get missed. But the potential here to really learn a lot more with large data sets, as long as everything’s captured ethically, seems like a huge potential win for humanity, [00:26:00] right?

[00:26:00] Where can people find you online if they want to learn more or look up your work?

[00:26:04] David: Yes, you can find me on LinkedIn. That’s probably the only social media page I use on a day to day basis, I would say. So, yes, that’s my name, David J. Westrop, and there’s a J in the middle. But David Westrop, you’ll be able to find me.

[00:26:17] I’m more than happy for people to reach out. Really open to having collaborative conversations. I really like learning from people, to be quite honest with you, learning from people from different industries. So I’m more than happy to have people reach out with questions, or just to have a cool chat, learn more about Adidas, learn more about my work, or vice versa.

[00:26:33] Yeah, that would be the place to contact me.

[00:26:34] Luke: Fantastic. Well, David, really enjoyed the conversation. Really appreciate you making it out; I know it’s late on a Friday. So thank you so much for joining. And I’d love to have you back to touch on some of these things in the future, too, as things move along, if you don’t mind.

[00:26:48] David: Yeah, no, sounds great. And let me know

[00:26:50] when you’re back here for a conference. We can meet up for coffee.

[00:26:53] Luke: Definitely, definitely. I’ll bring an oar and an inflatable raft just in case. [00:27:00] Thanks, David. Thanks for listening to the Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app.

[00:27:11] If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • The future of job roles in an AI-driven world
  • The importance of having knowledgeable tech talent to cut through the marketing noise and make informed decisions
  • How regulation can both stifle and spur innovation, emphasizing the need for ethical AI development
  • The future of AI agents and the impact they can have on our lives

Guest List

The amazing cast and crew:

  • David Westera - AI & Data Leader at Adidas

    David Westera is the AI & Data Leader driving innovation at Adidas. He specializes in helping companies achieve transformative growth, competitive advantage, and maximum profitability through AI- & ML-powered product deployment and integration. He’s spearheaded transformative AI and machine learning initiatives across various industries, including fintech, sports apparel, and banking.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.