The Future of Marketing and Creative Processes in an AI-Driven World
[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software.
[00:00:21] Makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API. You’re listening to a new episode of The Brave Technologist. This one includes Daniel Hulme. Daniel is a globally recognized AI expert, chief AI officer at WPP, and the CEO of Satalia. He has over 25 years of experience in research and applied AI.
[00:00:42] He is cited as one of the top 10 chief AI officers globally. Daniel is a serial TEDx speaker, has contributed to numerous books and articles on AI, and is a faculty member of SingularityU. He’s passionate about how technology can help govern organizations and bring a positive impact to society. In this episode, we covered how synthetic audiences [00:01:00] can be used for marketing and content creation,
[00:01:02] both in creation and moderation, how these synthetic audiences can be used to help mitigate some of the privacy concerns and risks, and how the rise of agentic AI can lead to sentience and how morality can potentially be introduced earlier into the process to help avoid conflicts and mistakes.
[00:01:19] And now for this week’s episode of the Brave Technologist. Daniel, welcome to The Brave Technologist. How are you doing today? Very good, Luke. How are you? Good, good. I think you’re the first chief AI officer I’ve ever interviewed so far. So, I’ve been looking forward to this one, man. You know, you’ve got 25-plus years in AI. What initially drew you to the field of AI,
[00:01:43] and how has your perspective on it evolved over time?
[00:01:47] Daniel: I think, you know, when you start to ask questions about, you know, what does it mean to be human? What’s consciousness? Can we migrate consciousness to machines? Can we build machines that are conscious, and all that kind of stuff? I think you can’t help but kind of fall [00:02:00] into the field of AI.
[00:02:00] And I think, you know, back in the day, I think physics was probably the closest subject, but I was very fortunate that my undergraduate was in what was called computer science with cognitive science. So it was actually AI, 25 years ago.
[00:02:14] Luke: Nice. I know people often don’t realize just how long AI stuff’s been around too.
[00:02:19] It’s kind of like, oh, it’s just ChatGPT, that kind of thing. What are you up to now at WPP, and what are you kind of most excited about in the space?
[00:02:29] Daniel: I guess, you know, when I did my PhD, I was very interested in brains. I was modeling bumblebee brains 20 years ago. We couldn’t really model big brains.
[00:02:37] And so I got interested in a different subject: optimization. And for my sins, instead of staying in academia, I started a company that essentially applies AI algorithms to solving problems across the supply chain. I started the company 16 years ago, and three years ago I sold it to WPP, so I continue to be the CEO of that original entity.
[00:02:55] Think of them as being WPP’s DeepMind. I’m now responsible for [00:03:00] coordinating AI across about 120,000 people, which is good fun. And, you know, thinking about not just how does AI accelerate transformation for WPP, but how does it completely disrupt the media marketing communications industry? And WPP gives me a lot of freedom.
[00:03:16] So I’ve just recently started a company with the support of WPP to lean into big questions, like what does it mean to build a conscious machine?
[00:03:24] Luke: Awesome. That’s fascinating. I mean, it’s quite a menu of opportunity that you’ve got in front of you, and things to try out.
[00:03:31] I was just at the AI Summit in New York last week and was interviewing folks from all over the place, everybody from Nordstrom to PayPal, et cetera. But you’ve got 80-plus-year-old supply chains that are getting disrupted by this stuff in the garment-making world.
[00:03:45] It’s pretty wild how deep everything’s gotten with people trying to apply AI to, you know, business models. It’s awesome. Yeah. Like I said before, I haven’t interviewed a chief AI officer before. Why don’t you unpack a little bit what this title means to [00:04:00] you?
[00:04:01] Daniel: Yeah, I guess I think very deeply about, about this because I get this question quite a lot.
[00:04:06] I think. Some industries are relatively immune to the disruption of AI. If you’re growing things, if you’re distributing goods, if you’re manufacturing products, those are not necessarily going to be disrupted by generative AI. There are other algorithms that have, you know, matured in academia over the past several decades that are very good at solving those types of problems.
[00:04:22] But there are some industries that are going to be and are being completely disrupted. Media marketing communications is one, which is the industry that I now find myself in, and I don’t think it’s good enough for CIOs, CTOs to sort of just kind of get up to speed with what’s currently going on and then think about the impact that has on the business.
[00:04:39] My CTO, Stephan Pretorius, the CTO of WPP, is phenomenal, but I think there’s something about having foundational knowledge and understanding about these technologies, a real appreciation for what the technologies can do now, what they’re right for solving, the trajectory in terms of how we think they might play out so we can place the right bets, but [00:05:00] also obviously somebody credible in the field. You know, I’ve been involved in AI for over 25 years,
[00:05:04] so I’ve been exploring big questions about its impact on business, its impact on society, and just having really a spokesperson as well, thinking about the safe and responsible use of these technologies. So I think for media marketing communications, it does need somebody that has deep, deep domain expertise, not somebody that just rebranded themselves as an AI expert over the past three, five, seven years.
[00:05:26] I guess I use this example quite a lot, which is: if you want to strategize about the legal future of your organization, would you hire somebody that’s just passed the bar and has a few years of experience? It takes seven years to do the bar, and, you know, a few years of experience doesn’t give you enough to really appreciate how the legal system is going to evolve.
[00:05:46] And, and so I think engaging with people that have decades of experience, that have been applying these technologies in production for many, many years, I think are going to allow organizations to build differentiated solutions.
[00:05:59] Luke: No, it’s [00:06:00] awesome. No, it’s great. There might be organizations too, where some people might be hesitant to adopt AI.
[00:06:05] Do you have any pointers, kind of, for folks in those organizations, or any advice, you know, people that might be kind of either, maybe they’re just generally skeptical, or maybe they’re legitimately kind of concerned about this new technology?
[00:06:16] Daniel: Well, I think there are a few things to unpack here. One is a lot of people think AI is generative AI, and it’s not.
[00:06:23] There are many different types of AI: algorithmic technologies that have been developed over the past 70 years that are really good at solving problems. What typically happens is the industry gets very excited about an emerging technology. You know, eight, nine years ago, it was machine learning, it was data science.
[00:06:37] So people would be building data lakes and putting Tableau or some sort of analytics layer on top, hoping that extracting insights would lead to better decisions. I think the current big thing is generative AI. And whilst generative AI is very, very powerful, the problem that I see is that people think they can apply that technology to solving any problem across an organization.
[00:06:57] And of course it can’t; it’s not a panacea for [00:07:00] solving all problems. So understanding the nature of your problems, and understanding what the right algorithmic techniques are to apply to that problem, is crucial. Now, the point here is that I’ve built an entire career around defining AI and helping people understand what AI is and isn’t, and I’ve developed a framework to think of AI not through definitions or through technologies, but through applications.
[00:07:23] And I always advocate for organizations to start with: what’s my problem, or what frictions exist across my organization? And then, how do I apply the right solution to solve that problem, rather than getting seduced by emerging technologies and hoping that’s going to solve all of your problems?
[00:07:38] I think ultimately, applying the right algorithms in the right way will allow organizations to be more efficient, more effective, right? Now, what organization doesn’t want to be more efficient, more effective? But now that people are realizing the power of AI, they realize they can also completely disrupt their industry.
[00:07:54] So, so that’s also a big question they need to be leaning into.
[00:07:58] Luke: Yeah, no, I think, I think it’s great. That’s [00:08:00] great advice. From your point of view, what problems do you see AI as like being like really poised well to disrupt, you know, in your world right now, like, or have the most significant impact on innovating around?
[00:08:12] Daniel: If you look at media marketing communications, then we’ve used machine learning to predict activations: clicks, likes, sales. We use optimization algorithms to allocate content across channels. What generative AI has unlocked is two very important things. One, the ability to create content incredibly quickly.
[00:08:29] Historically, if you wanted to create an ad, it would take weeks, months to create a production-grade ad. Now it can take seconds. So going from weeks to seconds is a massive disruption. But what’s also important about what large language models have unlocked is not just the ability to create content.
[00:08:46] It’s the ability to understand how people perceive content. And now what we can do is test content against synthetic audiences to see how they think and feel about it. That now gives us access to signals which we historically [00:09:00] haven’t had access to, to now create better content, but also now more accurately predict activation.
[00:09:05] So large language models have essentially hypercharged our ability to create much more effective ads.
[00:09:13] Luke: Okay. That’s super interesting. Can we dig in a little bit? When you say synthetic audiences, what does that mean?
[00:09:20] Daniel: Yeah. So, if I show you an ad or a policy or some promotional material or any experience, historically, I didn’t know what goes on in your brain and body
[00:09:28] unless I asked you, and people are not very good at reporting on what goes on in their mind and body. But for the first time ever, we can build large language models and augment those large language models with data to be able to recreate how audiences perceive things. And there’s been a history in marketing where people have tried to collect names, addresses, dates of birth.
[00:09:49] As far as I’m concerned, that data is a proxy for who you are. It’s not really who you are. How you perceive something depends on whether you’ve just fallen in love, how much money you [00:10:00] have in your account, whether you’ve just eaten, how well your soccer team played at the weekend, the time of day. Understanding the data you need to build a representative audience is, I think, a differentiator.
[00:10:11] That’s actually one of my big focuses over the next two years: how do we identify those data sources? They might be Reddit, they might be very specific data sources. Or how do we get AI to infer and create new data about audiences that we’ve never been able to know before?
[00:10:27] Luke: Yeah, that’s super interesting.
[00:10:28] I was literally going to ask you that as the next question, because it really kind of gets me wondering, like, are the signals that you look for different? Is it a different type of data that you have to work from than everybody’s used to, just kind of like, okay, run a thousand tags on people and cohort mapping and all that stuff?
[00:10:45] What’s the second, third-order effect of seeing something and then acting on it, right?
[00:10:49] Daniel: Yeah, I think there are two things here. One is that, you know, you can get access to data sources that allow you to identify signals in terms of how people behave. But I usually turn the [00:11:00] question around, which is: if you had
[00:11:01] a graduate sitting next to you, which is a bit like what a large language model is, what data would you get them to go and consume to understand a culture or a minority group or a political party or a newspaper or a particular individual? What data would you use to be able to understand how that
[00:11:21] group of people think and feel about things? And the reality is, it’s a very hard problem to solve.
[00:11:26] Luke: Well, and it’s super interesting too, because obviously at Brave, we’re super privacy-focused on everything, but we’ve had to kind of think around, okay, what do you actually need to serve an ad?
[00:11:36] Right. You know, and so much data gets collected, right? There’s like personally identifiable whatever. But it sounds like with this new approach you guys are digging into, you don’t necessarily need that.
[00:11:50] Daniel: There’s a very nice meme or slide that knocks around the internet every now and again, which is a picture of Prince Charles, now King Charles, and a picture of Ozzy Osbourne.
[00:11:58] And it says, look, they’re the [00:12:00] same age, they live in a castle, they’re famous. Their demographics are very, very similar, but the reality is they are two very different people.
[00:12:09] Luke: Totally, totally. That’s awesome. Yeah, there’s obviously like some, some big kind of impacts. You know, there’s micro impacts, there’s audience impacts like we’re talking about.
[00:12:17] And then there’s more like kind of societal shifts too. Like, what societal shifts do you foresee as AI becomes more and more integrated in daily life across the board?
[00:12:26] Daniel: I think maybe the audience might have heard the term singularity. Singularity comes from physics; it’s a point in time that we can’t see beyond.
[00:12:33] And it was adopted by the AI community to refer to the technological singularity, which is where we build a brain a million times smarter than us. I actually think there are six singularities, and I’ve tried to use the PESTEL framework to capture them. But just very quickly, we are potentially approaching a post-truth world, a world where we no longer know what is true.
[00:12:56] We don’t know if this thing that we’re engaging with is real or not, [00:13:00] and that could be very problematic. I actually think that we can mitigate the risk of a post-truth world using AI. That will have social impact if people lose trust in the content they are experiencing. As we get better at understanding how people perceive things, we get better at influencing people, and AI is enabling us to do this really well.
[00:13:22] Now, that’s an incredibly powerful position to be in. If you can influence people, really, really influence people, then we need to make sure that obviously we’re doing that safely and responsibly. We also need to mitigate the risk of bad actors using that technology to accumulate more wealth, more power.
[00:13:37] There is obviously the impact of AI on jobs. You know, for the past 16 years, my company has been building AI solutions that have been freeing people from repetitive, structured tasks. Those people have not lost their jobs. And I think that for the next 10 years, we’re going to see a Cambrian explosion of the same.
[00:13:51] New innovations will be created. People will have more opportunities to contribute. AI is like an energy source that will allow [00:14:00] humanity to grow. I think beyond 10 years, nobody knows what they’re talking about. And maybe, just to give you the two extremes of the argument that people might have been exposed to:
[00:14:05] one is that if organizations can free up whole jobs, we probably will, you know, given the pressure to reduce costs and increase profits. If that happens very quickly, our economies might not be able to rebalance, and it could lead to social unrest. I think we need to be leaning into that problem
[00:14:20] and thinking about a four-day working week and UBI. Then there is, you know, the other extreme, which is: if we automate or remove the friction from the creation and dissemination of goods, healthcare, education, energy, transport, food, by using AI, we can remove so much friction that those goods essentially become free.
[00:14:41] So imagine being born into a world where you don’t have to do paid work, but everything you need to survive and thrive as a human being is abundant. And I think that if we get our timing right, we could create a world where people are completely economically free to do whatever they want. So, you know, there’s a lot of concern about the impact of AI on jobs.[00:15:00]
[00:15:00] I think actually people do want to work, but we don’t always get to work on the things that we want to do, and AI potentially could unlock that opportunity. AI also is advancing medicine, and, in theory, according to some academics, there are people alive today that don’t have to die. And I don’t know what the world will look like when we realize there are people amongst us that won’t have to die.
[00:15:20] So essentially AI is allowing us to face some very big questions as a species over the next few decades. It’s a very important, exciting time to be alive. And I want to make sure that we’re on the right side of those decisions.
[00:15:33] Luke: I’d love to drill down into the first point you talked about there, around, you know, if people don’t know what’s real, what’s true anymore. And you mentioned that you see ways AI could help mitigate that risk.
[00:15:44] Like, what is the lowest-hanging fruit from your point of view on this? And the second point to that is, you know, how are you guys thinking about the impact of that? I mean, advertising is obviously one thing, and marketing, right? But, more broadly, are there tools in place [00:16:00] you guys are exploring now, ways to kind of help prove authenticity or something like that?
[00:16:04] I’d love to hear more of your take or dive a little bit deeper on that because, you know, yeah, like it seems like a pretty topical item.
[00:16:11] Daniel: Yeah. Well, it actually ties back to this concept of audience brains. So I guess historically, if you’re posting things on social media, the platforms just haven’t had either the need, the desire, or the resources to be able to moderate every single piece of content.
[00:16:21] How can I determine where this piece of content is coming from? Is it coming from an authentic source or not an authentic source? So I actually think that what we can do is build brains, audience brains, that represent every corner of society: political parties, newspapers, minority groups, even things like food compliance claims, ad complaints, sustainability claims.
[00:16:49] So what you can essentially do is create this sort of council of thousands of representatives, so that when any content is pushed out there, I would argue it needs to be shown to [00:17:00] this council, and the council can then, you know, highlight where it might trigger a certain community, break any laws, or cause any harm.
[00:17:07] And if it reaches a certain threshold of harm, I think that content needs to be authenticated before it’s even pushed out there. So I think there’s a way of using AI to allow us to moderate and then authenticate content.
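Daniel’s council-of-representatives moderation flow can be sketched the same way. In the toy version below, the reviewers are simple keyword checks standing in for LLM-backed community models, and the 30% harm threshold is an invented parameter for illustration, not anything WPP has described.

```python
# A council of audience "brains" reviews content before publication. Any
# reviewer can flag potential harm; if the share of flags crosses a
# threshold, the content is routed for authentication instead of publishing.

def make_reviewer(sensitive_terms: set[str]):
    """Build a reviewer that flags content containing any sensitive term.
    A real reviewer would be an LLM persona representing a community."""
    def review(text: str) -> bool:  # True = flags the content
        words = set(text.lower().split())
        return bool(words & sensitive_terms)
    return review

def council_verdict(reviewers, text: str, threshold: float = 0.3) -> str:
    """Return 'authenticate' if enough reviewers flag the text, else 'publish'."""
    flags = sum(review(text) for review in reviewers)
    share = flags / len(reviewers)
    return "authenticate" if share >= threshold else "publish"

council = [
    make_reviewer({"miracle", "cure"}),        # health-claims reviewer
    make_reviewer({"guaranteed", "returns"}),  # financial-claims reviewer
    make_reviewer({"free"}),                   # consumer-protection reviewer
]
print(council_verdict(council, "Miracle cure with guaranteed returns"))
# two of three reviewers flag it -> "authenticate"
print(council_verdict(council, "Our new running shoe is here"))
# no flags -> "publish"
```

The threshold is the policy lever: a low value means a single triggered community is enough to require authentication, while a high value only stops content that many communities object to.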
[00:17:21] Luke: Yeah, super interesting. I mean, there are so many other hamster wheels kind of spinning on this one. So do you guys see building out the system with these AI brains and then making that something like APIs people could plug into, or something like that, as the end game?
[00:17:34] Yeah.
[00:17:35] Daniel: As far as I’m concerned, this should actually be part of the role of the government. I think there should be a mechanism to make sure that you can test content. Now, maybe there are commercial models there. I think that’s a very powerful thing to help mitigate the risk of misinformation, but it’s also powerful for, you know, making sure that you’re not triggering any communities or causing any harm.
[00:17:56] Luke: Yeah, no, that’s great. There’s a lot happening on the regulatory side; it [00:18:00] seems like a lot of education or knowledge building. What’s your take on the impact of current regulatory efforts? Are they inadequate, adequate, or are you seeing anything getting in the way?
[00:18:11] Like, kind of just curious about your take.
[00:18:14] Daniel: What I think is really nice and heartening is that, absent of regulation, big organizations have been regulating themselves. They’ve really been thinking about the careful, safe, and responsible use of these technologies. Certainly at WPP: with that absence of regulation, we make sure that we’re using these technologies without violating copyright, that data isn’t leaking, and that we are
[00:18:37] building safe and responsible ads using these technologies. So I think organizations have been self-regulating. I don’t think that’s the answer. I think we do need global regulation; whether the global community is going to agree on that regulation is a different matter. I think, you know, the rhetoric is that Europe tends to be
[00:18:56] overly regulated and the US, you know, much more innovation-[00:19:00]orientated. I think we won’t know what’s right until we start to see real harm from these technologies. Now, that’s unfortunate. It’s unfortunate that cars have to crash before you put seat belts in them, and things like that. So I think regulation is absolutely needed.
[00:19:16] I think, unfortunately, it won’t come until we see real cases of harm.
[00:19:23] Luke: I think that makes a lot of sense. I mean, it’s all over the place. If you look at how privacy regulation was rolled out, there was already a ton of market fit with advertising, and things were well established there.
[00:19:32] And it seems like with AI, people are still discovering the fit, right? And it’s hard to regulate until you even know where the car is going to drive, right? But yeah, I’m super curious too. I mean, we’re seeing lots of innovation on content creation,
[00:19:45] like you were mentioning, right? There are still obviously some gaps. How do you think that’s going to influence people? I mean, this is anecdotal, right? You could tell a bot reply on Twitter, or whatever, you know, years [00:20:00] ago, right? But now it’s almost like you’re starting to see people say, oh, I can tell that’s a ChatGPT output, right,
[00:20:03] like an answer or whatever. How much do you think it’s going to be kind of a tit-for-tat thing on the content side? And how much of the human element do you think will be, you know, kind of abstracted away?
[00:20:13] Daniel: I think content creation is going to become excellent with regards to AI.
[00:20:19] I think, assuming that we could solve end-to-end marketing and create ads, you know, incredibly quickly, within seconds, for the relevant moments and relevant people, I would argue the only differentiator is human creativity. It requires creativity to differentiate one piece of content from another.
[00:20:34] That’s what makes things stand out. I’m really interested in leaning into the question of how do we get AIs to augment, enhance, accelerate creativity. That’s going to be my biggest focus for next year. And I would argue there are two questions. I call the first the Monet problem, which is: how do you create a Monet, as in, let’s say, a Monet rendition of a crocodile standing on a bubble?[00:21:00]
[00:21:00] How do I do that in a way that best represents that depiction in the style of Monet? The other question I ask myself is, how do I create Monet? Not how do I create a Monet, but how do I create Monet: how do I come up with impressionism, a new genre? I actually think that we can use AI to do both of those things, which I think is very, very exciting, because it will allow us to push the bounds of creativity, for human beings to enhance themselves and explore.
[00:21:26] Luke: Are there any areas within WPP where you’re seeing this innovation that are kind of blowing your mind right now, or that you want people to look at?
[00:21:34] Daniel: Oh, yeah. I mean, we’ve developed an end-to-end platform that we call WPP Open that’s powered by AI. And, you know, it does the whole marketing stuff around figuring out activation and pushing content across channels, but what it has allowed our teams to do is just ideate more and
[00:21:51] test those ideas against synthetic audiences. So what we’re seeing is a massive improvement, not just in the quantity of content, but in the [00:22:00] quality of content. And I’m actually excited for the next 10 years, where we’re going to see much better quality ads. And if you, you know, think about the ads that have resonated with you,
[00:22:11] They’re not ads that are necessarily personalized to you, but they’re ads that trigger and tap into shared moments as a species. And that’s what really I’m excited about. I’m excited about how brands can use advertising to promote their, their purpose, to promote the positive side of their products and, and how it can help people feel much more connected.
[00:22:33] Luke: It’s awesome. Yeah. It seems like such a ripe area for innovation right now, where you’ve got these oversaturated ad formats that have been out there forever, right? And people have seen way too much of them.
[00:22:47] The mix of how brands can become more like larger tribes, actually have more stake in things, and then connect on this new level. It’s just awesome. Yeah. It’s very [00:23:00] exciting. Yeah. And we’ll be sure to link that in the show notes.
[00:23:03] What resources would you suggest for people that, let’s say, are working in marketing, or they want to get more involved in AI but don’t know where to go with it? What would you suggest?
[00:23:16] Daniel: Well, first of all, you know, there’s tons of material of me harping on about this stuff on the internet.
[00:23:20] So also feel free to reach out to me; I’m very happy to engage with people. I think the real thing is to play around with it, you know, not go and read about it in books, but actually play around with it, use it, and try to integrate it into your day-to-day work, and then try to get an appreciation, as the months go by, of how smart the technologies are getting, because they will evolve and they will get smarter.
[00:23:44] So you get an appreciation essentially of the trajectory. But don’t go and read about it; just go and use it and start to bring it into your work.
[00:23:53] Luke: That sounds like sound advice. Of all the noise out there, concern trolling, all these things, what area are you actually [00:24:00] legitimately worried about in this space right now, based on what you’re seeing, that you think we need to really tackle?
[00:24:06] Daniel: I’m worried about agents. So, you know, we’re starting to now give AIs what is called agency; it’s now called agentic computing. I think there are misinterpretations of agentic computing, but let’s assume the one definition is giving agents the ability to have agency: make decisions, purchasing power, et cetera, et cetera.
[00:24:24] And once we start to give AIs agency, my guess is that those AIs are going to start to make mistakes, and they could be immaterial mistakes or material mistakes. And so I’m very interested in leaning into the big question of how do we actually align AIs with moral systems? Not how do we build big, smart brains and play whack-a-mole to try and make sure they don’t do bad things, but how do we actually integrate and embed moral behaviors into the sort of DNA, the neurons, of an AI?
[00:24:56] And actually, that’s one of the reasons why I’ve started Conscium, [00:25:00] which is trying to understand machine consciousness.
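The "whack-a-mole" approach Daniel contrasts with embedded moral behavior can be illustrated as an external guardrail: an agent with purchasing power whose proposed actions are filtered by bolted-on checks before execution. The budget cap and blocked categories below are invented for illustration, not any real system’s policy.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    category: str
    cost: float

class GuardedAgent:
    """An agent with purchasing power wrapped in external constraint checks.
    The checks sit outside the agent's decision-making, which is exactly the
    'bolt-on' pattern, as opposed to behavior embedded in the agent itself."""

    def __init__(self, budget: float, blocked_categories: set[str]):
        self.budget = budget
        self.blocked = blocked_categories
        self.log: list[str] = []

    def permitted(self, action: Action) -> bool:
        """External guardrail: category blocklist plus a spending cap."""
        return action.category not in self.blocked and action.cost <= self.budget

    def act(self, action: Action) -> bool:
        """Execute the action only if it passes the guardrail."""
        if not self.permitted(action):
            self.log.append(f"blocked: {action.description}")
            return False
        self.budget -= action.cost
        self.log.append(f"executed: {action.description}")
        return True

agent = GuardedAgent(budget=100.0, blocked_categories={"weapons"})
agent.act(Action("buy ad inventory", "media", 60.0))       # allowed
agent.act(Action("buy restricted item", "weapons", 10.0))  # blocked by category
agent.act(Action("buy more inventory", "media", 60.0))     # blocked: exceeds remaining budget
print(agent.log)
```

The limitation is visible in the code: every new failure mode needs a new rule in `permitted`, which is the whack-a-mole dynamic; the alignment question Daniel raises is how to make the agent not propose the bad action in the first place.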
[00:25:03] Luke: Oh, do you want to dig into that a little bit more? I mean, I’m sure people would love to hear about it.
[00:25:06] Daniel: Yeah, I mean, we can totally dig into it. So, you know, Conscium is actually invested in by WPP, and it really is trying to solve two big questions.
[00:25:15] One is, how do we align AIs? And the second is, how do we prevent what is called mind crime, which, it sounds very science-fiction-y, but if we end up building machines that have autonomy, that have agency, it’s possible that as those machines adapt to the world, they become sentient. And we have a duty of care, not just to human beings and animals; we also now have a duty of care, potentially, to AIs.
[00:25:40] And so Conscium is going to be making available technologies that are now emerging from academia, called neuromorphic technologies. I won’t bore you with the details, but they are most likely going to be the next generation of AI. Large language models are incredibly powerful, but they’re slow.
[00:25:55] They’re massively energy-inefficient, they require lots of data to learn, and they’re not very [00:26:00] adaptive. New emerging technologies that are predicated much more on how our brain works are going to unlock the next generation of AI. And that unlocking of the next generation of AI, robots with agency,
[00:26:11] also raises big questions about sentience, about consciousness. So I actually think that this next decade is going to be the decade of AI consciousness. And there’s lots of things that we can learn along the way, which is one of the reasons why WPP has invested in it. But it’s, again, a very exciting decade to be alive.
[00:26:28] Luke: That’s awesome. That’s such an interesting point. We covered a ton here. I’m super grateful for you coming on and sharing so much. It’s been an awesome conversation. Is there anything we didn’t cover that you think our audience should know about?
[00:26:42] Daniel: I don’t know. I think, you know, there’s a lot of scaremongering and misinformation associated with AI.
[00:26:48] And the reason why I do these podcasts, and the reason why I do a lot of public speaking, is to try to close that gap, to really help people understand what these technologies can do and what they can’t do. And when you demystify them, people become less scared of them. [00:27:00] That said, you know, this is probably the most powerful technology that humanity has ever created, and it will create change.
[00:27:07] And I think leaning into this as an individual, as a species, is going to be very important. The more people that we have thinking and engaging with these topics in a meaningful way, not just regurgitating what they get told on LinkedIn, but engaging with them in a meaningful way,
[00:27:23] I think that that will allow us to steer towards the better outcome of the use of these technologies.
[00:27:30] Luke: I agree, man. Daniel, this has been awesome, man. Where can people follow along if they want to follow your work and reach out?
[00:27:37] Daniel: My email address is daniel@hulme.com, or daniel.hulme@wpp.com. But you can just find me on the internet.
[00:27:44] yeah, happy to keep the conversation going.
[00:27:46] Luke: Awesome, man. Well, again, I really appreciate you making the time for this conversation today. I’d love to have you back to, to check back in on how things are going over time too. But, yeah, thanks for, thanks for dropping by Daniel. It’s been really great.
Anytime, Luke. Thanks. Thanks for listening to the Brave [00:28:00] Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately.
[00:28:13] Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.