
Episode 40

The Long-Term Societal Impacts of AI: A Holistic View of Opportunities and Challenges

Lambert Hogenhout, Chief of Data and AI at the United Nations Secretariat, explores the delicate balance of embracing technological advancements while maintaining our authentic human identities. We also discuss the benefits of open-source models for global equity, and the crucial intersection of data privacy and AI.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:25] You’re listening to a new episode of The Brave Technologist, and this one features Lambert Hogenhout, who is the Chief of Data and AI at the United Nations Secretariat. Lambert is also an author, keynote speaker, and advisor on AI and the responsible use of technology. He has 25 years of experience working both in the private sector and with international organizations such as the World Bank and the United Nations.

[00:00:47] In this episode, we discuss the ever-evolving relationship between data privacy and AI; organizational readiness; how organizations can ensure that their AI systems are designed and implemented with ethical considerations using [00:01:00] safety-by-design principles; building trust amongst your team and community through transparency and responsible AI; and the societal implications of technology, including maintaining our own authentic identity as humans while the adoption of AI becomes mainstream.

[00:01:13] And now for this week’s episode of The Brave Technologist. Lambert, welcome to the Brave Technologist podcast. How are you doing today?

[00:01:20] Lambert: Very well. Thanks for inviting me.

[00:01:22] Luke: Yeah. Thanks for coming. I’ve been really looking forward to having this discussion. Let’s kind of set the table a little bit for the audience.

[00:01:28] Can you let us know a little bit about how you ended up doing what you’re doing, and how you see your work affecting the world around all of us?

[00:01:37] Lambert: Sure. Yeah. So I oversee data and AI at the United Nations Secretariat, and I came into that role gradually. I’ve been with the UN for a long time.

[00:01:50] I worked with humanitarian affairs for about 10 years, providing all the IT for humanitarian operations around the world. And then at some [00:02:00] point I moved to the central IT division. One of the first things I did there, about 10 or 12 years ago, was to introduce data visualization. It was the first time for many people to encounter that idea of getting insights from data.

[00:02:16] You know, before long, we were going through a digital transformation, like many other organizations at that time, and data was always key. So we started focusing more and more on data. And as technology matured and as we matured as an organization, we introduced more advanced analytics techniques, NLP (natural language processing) and machine learning (ML), with varying degrees of success.

[00:02:44] I think NLP was very relevant for us because we had a lot of documents; we’re a document-heavy organization. Machine learning got a sort of lukewarm reaction, especially when people discovered that they had to clean up their [00:03:00] data and said, no, no, we’re not going to do that.

[00:03:04] And then I also oversaw the innovation team and the emerging tech team. And at some point, especially with generative AI coming on the scene, everything came together. After the ChatGPT moment, everybody was suddenly interested in AI. So now we had a carrot for people to work on their data as well.

[00:03:27] And that’s how the data and the AI and the emerging tech and the innovation really all came together. It’s all one thing now. Obviously, it’s going to have a huge impact on the world in the coming years, and for the United Nations as well. There are lots and lots of opportunities, also some concerns, but I think it’s very important that we get our heads around it.

[00:03:50] And that we use it to our best advantage.

[00:03:54] Luke: Yeah, speaking to that concerning element, there’s a lot of talk about responsible [00:04:00] AI, what it means, and why it’s crucial. I’m wondering if you could help educate our audience a little bit: from your point of view, how do you see the concept of responsible AI, and how do you see it applying to the landscape?

[00:04:13] Lambert: It is one of the first things we thought about. As an organization, we try to make the world better. So if, in the process of adopting technology, AI or any other technology, we make things worse, that really defeats the purpose, right? So to me, it means using AI in a sustainable way, and by that I mean from a technological, social, and environmental perspective, and creating something, I like to use the word wholesome, that is holistically beneficial, not in a narrow sense.

[00:04:46] And I think often people have a particular objective in mind when they develop technology, including AI, and they don’t look so much at the side effects or the long-term effects. To me, responsible AI [00:05:00] means doing exactly that: looking at it holistically, at the side effects and the long-term effects as well.

[00:05:07] Luke: And I would imagine from your vantage point, you’re in a unique position to have a lot of different inputs, or at least points of view, into different use cases and how these things can be applied. To broaden it a bit, to build on what you were saying about there sometimes being too narrow an objective or focus when people are building these things:

[00:05:28] Are you seeing a lot of interesting applications of AI in ways that you maybe didn’t necessarily anticipate, that are motivating you to help provide guidance in your work right now? Or is it a pretty early phase still with these things?

[00:05:42] Lambert: A couple of things. First of all, yes, there are lots and lots of opportunities, and we can get into some of those particular opportunities.

[00:05:50] I think it is also changing very rapidly. I mentioned NLP and ML before, but now, with generative AI [00:06:00] systems becoming much better at both interpreting content (not only text, but multimodal content) and producing that content, we have a lot of new roles for AI, where AI can be the interface between humans and what happens behind the screens.

[00:06:19] And that can be AI, or that can be traditional applications, or retrieval from databases, or doing things like booking airline tickets or ordering a new chair for my office. So there’s a lot of change happening, and I think one of the things that is hard for people is understanding what they can do with AI and what isn’t going to be obsolete again four or five months from now.

[00:06:45] I think a lot of the things that we built four or five years ago are completely obsolete now. We started building chatbots using the Microsoft Bot Framework and these things. You know, this was in the time when all these devices like [00:07:00] Alexa and Google Home and others were popular as well.

[00:07:03] We built that, and in the past year all of that technology has been completely overtaken by the capabilities of generative AI. Now we can ask: should we not have done that? Should we not have invested in that? But I think we needed to, and I think we need to realize that whatever we’re doing, a lot of it is going to be obsolete two years from now.

[00:07:24] But what remains is the organizational readiness. There’s a lot in processes within the organization, and in people’s thinking, that gets changed through these AI applications. So the organizational maturity is a benefit that remains.

[00:07:41] Luke: That’s a really, really great point.

[00:07:44] And one that people might not necessarily expect to hear either, you know, the organizational readiness. Yeah, things might become obsolete, but the best way to learn a lot of these things is by doing, right? And so if you’re not in there doing them... I think that’s a real fear that a lot of people have when they look at these large [00:08:00] bodies,

[00:08:00] whether it’s regulators or things they’re familiar with in a different context, that are getting into these new areas. So it’s super cool to hear that you all have been really hands-on with the tech, that you’re learning as you go with it. Because I think maybe it’s just not on people’s radar, right?

[00:08:15] Like they’re not there, but it’s super cool to hear about this organizational readiness, that you’ve been preparing the org for moving in this direction. I want to throw in another piece here too, because I speak with a lot of people from the EU, and privacy is becoming more and more of a global issue, right?

[00:08:30] How much of the data privacy discussion are you seeing in the work with AI? Are you concerned about it? Is it something that you’re thinking about too, with regards to the application of AI?

[00:08:42] Lambert: Yeah. Well, I came from the field of data. So before AI became the number one discussion topic in every meeting, I focused a lot on data privacy, and I co-chaired a working group at the UN that established the principles for the protection of personal data.

[00:08:59] It’s [00:09:00] public, you can look it up. In the old days I would say Google it, but now I might say ask Claude or Perplexity or your favorite LLM. But yeah, it’s public. And I think it was very fortunate that I had that background, because AI and data privacy are very much intertwined.

[00:09:16] I see two trends happening in the world over the past couple of years. On the one hand, with advanced analytics and now AI and generative AI, we can do so much more with data, and more data is accessible to us. Increasingly, AI can just gobble up raw data and use it. So there’s more data and we can do more with it.

[00:09:37] But then the second trend that is happening is an increasing concern about privacy. And I see that as a cultural shift that is spreading across the globe, which manifests itself in, among other things, increased regulation. That started with data privacy laws: the GDPR in Europe, the PIPL in China, [00:10:00] the LGPD in Brazil, the POPIA in South Africa, PIPEDA in Canada. And here in the

[00:10:05] U.S., it’s a more fragmented picture, by state, like in California, or by industry, like HIPAA for healthcare and the GLBA for banking. So you have all these data privacy laws, and I think it’s now happening with AI regulation as well. Starting again with the EU’s AI Act, various other countries are starting to implement AI regulation.

[00:10:26] And when they’re creating these regulations, if you read these laws, they refer to each other quite a bit: in the AI laws you will see references to the data privacy laws, because these things are very interconnected. Obviously there are concerns about large LLMs gobbling up all the data, including private data, and about people creating RAG models for information within their organizations,

[00:10:53] wanting to expose some of that to customers, but maybe not always controlling how [00:11:00] much those customers can jailbreak the system and get other information out of it. How much data leakage will there be? And as we move to different paradigms of AI in the coming years, if we’re going to use agentic models,

[00:11:13] where there are many agents in an organization that help each other, then controlling what a particular customer-facing or public-facing bot has access to, which other agents it can mobilize, and what kind of information it can disclose or be enticed to disclose is going to be quite tricky.
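(To make that access-control idea concrete: a minimal sketch, assuming a toy in-memory corpus and hypothetical names like `Document` and `retrieve_for_bot`. This is illustrative only, not any specific framework and not the UN’s implementation.)

```python
# Hypothetical sketch: scope what a customer-facing bot can retrieve,
# so even a jailbroken prompt cannot surface documents the bot was
# never allowed to read. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    sensitivity: str  # e.g. "public", "internal", "confidential"

# Each bot identity maps to the sensitivity levels it may access.
CLEARANCE = {
    "public_bot": {"public"},
    "staff_bot": {"public", "internal"},
}

def retrieve_for_bot(bot_id: str, query: str, corpus: list[Document]) -> list[Document]:
    """Toy retrieval (substring match) filtered by the calling bot's
    clearance. The filter runs server-side, outside the model, so a
    prompt injection cannot widen the bot's access."""
    allowed = CLEARANCE.get(bot_id, set())
    hits = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in hits if d.sensitivity in allowed]
```

The key design point is that the permission check lives outside the model: the LLM never sees documents beyond its caller’s clearance, so there is nothing for a jailbreak to extract.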

[00:11:32] Luke: Yeah, definitely.

[00:11:32] No, it’s fascinating. You mentioned access earlier too, and I ask people this question a lot, but I think you’re a really interesting person to ask: how much are you thinking about general accessibility to the technology? Are you seeing a good amount of global adoption, or are you concerned about certain parts of the world not having access to the tooling? What’s your take from your vantage point on this topic of accessibility?

[00:11:58] Lambert: Well, I think there are two sides to [00:12:00] it. First of all, we’re all becoming a digital global village. So if something is available online, it’s available to everyone, and I think it’s good that access is becoming wider. But it’s undeniable that the places where all that technology resides, who owns the IP and who owns the tech, are concentrated in a few places on earth.

[00:12:21] And, you know, I think that’s an unavoidable reality. But I do think it’s good for countries and regions to embrace AI and to create their own local centers of excellence, partly to prevent brain drain, giving people locally the opportunity to work on interesting problems online. Also, from a cultural perspective, I think it’s good that there’s a multitude of models being developed.

[00:12:50] Luke: Yeah. How can organizations ensure that their AI systems are designed and implemented with ethical considerations from the outset? What’s your take on [00:13:00] how that could be implemented in a way that would be useful?

[00:13:03] Lambert: Well, if I had to give an answer in three words, it would be safety by design.

[00:13:07] Yeah. You know, that’s what it comes down to, similar to privacy by design. But for AI, it hasn’t been standardized or crystallized that way much yet. There are different variations, but for organizations concerned about this, there are a couple of things to keep in mind.

[00:13:25] First, the fact that there are different layers to AI. There’s the development of the models, where the ingestion and use of data is very important and you worry about things like bias in data sets and the general quality of the data. And then, once you have such a foundation model, there’s the design of the application.

[00:13:45] What are you going to do with that model to build an AI application? That’s where things like fairness and other risks need to be considered; something like an AI impact assessment is quite common and useful in this space. [00:14:00] And then you have the deployment and runtime, where things like observability are important.

[00:14:06] Monitor your outputs, get feedback from users, monitor for abuse of systems like jailbreaking. Some of the regulations, like the AI Act, distinguish between these different layers and assign them different accountabilities. But as an organization concerned about doing AI in the right way,

[00:14:27] I think it’s useful to look at all of that. And then there are best practices: if you have a lot of AI development in your organization, standardize the approach, do independent checks, learn from the defects in your systems.
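(As an illustration of that runtime layer: a minimal observability sketch, assuming hypothetical helpers like `record_interaction` and a toy keyword heuristic for jailbreak attempts. Real systems would use proper classifiers and human review; this only sketches the shape of the practice.)

```python
# Hypothetical sketch of runtime observability for an AI application:
# log every output, flag suspected jailbreak attempts, collect feedback.
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-observability")

# Toy heuristic only; illustrative, not a real detection method.
JAILBREAK_PATTERNS = [r"ignore (all )?previous instructions", r"reveal .*system prompt"]

def record_interaction(user_id: str, prompt: str, response: str) -> None:
    """Log every interaction and flag suspicious prompts for review."""
    suspicious = any(re.search(p, prompt, re.IGNORECASE) for p in JAILBREAK_PATTERNS)
    log.info("ts=%s user=%s suspicious=%s prompt_len=%d response_len=%d",
             datetime.now(timezone.utc).isoformat(), user_id, suspicious,
             len(prompt), len(response))
    if suspicious:
        log.warning("possible jailbreak attempt by user=%s: %r", user_id, prompt[:80])

def record_feedback(user_id: str, rating: int) -> None:
    """Collect a simple thumbs-up/down rating (+1 or -1) from users."""
    log.info("feedback user=%s rating=%+d", user_id, rating)
```

The point is not the specific checks but that outputs, abuse signals, and user feedback are captured systematically, so defects can be found and learned from.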

[00:14:42] Luke: Let’s dive into the safety element, because I think that’s something that gets blown out of proportion a lot in the zeitgeist; people get hyperbolic, very concerned about a Terminator type of outcome. But from your point of view, when we start to look at this topic of safety, what are the [00:15:00] real, pressing safety issues that you see in the AI space that organizations should be concerned about?

[00:15:09] Lambert: You know, typically when it comes to the level of a boardroom, people are very pragmatic and say, okay, why are we doing this? What is the imperative here? And I think there are three parts to it. There’s obviously regulation, and I think it’s important for organizations to look forward, because this field is rapidly evolving: don’t look at what the regulation is now, but at what it is going to be three or four years from now.

[00:15:38] It’s the famous quote from the ice hockey player Wayne Gretzky: I don’t skate to where the puck is, I skate to where the puck is going to be. So regulation is one thing. Then there’s risk management: beyond the fines they can get, there are also things like reputational damage and data leakage. That’s a second aspect.

[00:15:56] But then I think there’s something [00:16:00] else, and I referred to that before when I talked about a cultural shift: the expectation that the public has of a company acting responsibly and being trustworthy. And when you look at that, there is a possibility to see this as an opportunity, to not treat responsible AI or safe AI as a compliance issue or a risk management issue.

[00:16:27] This is about trust: trust within your own organization with your own staff, trust with your partners, and trust with your customers and the public. And if you have a clear stance on how you are going to use AI, and you’re transparent about that, you can leverage that to become a quote-unquote trusted company.

[00:16:48] And there are a number of companies doing that already, using it as part of their brand, as part of their marketing strategy. If you want to do that, if you want to seize that opportunity [00:17:00] to do responsible AI as an organization, that means responsible AI shouldn’t sit with the legal department or the IT department; it belongs in the boardroom.

[00:17:09] It starts to become a strategic choice. So the strategy team, the branding and marketing people, and of course legal and technical all need to be involved. It becomes a cross-cutting issue.

[00:17:23] Luke: Part of the ethos, it sounds like, you know, kind of embedding that safety responsibility into just everyday practices.

[00:17:29] Lambert: You want to be genuine about it, right? You don’t want to just look good, like, oh, we do AI well. No, you have to sit down and ask, okay, what are our principles and how do they align with our company’s principles, with what we stand for as a company, and align your AI direction with that to be authentic and genuine.

[00:17:50] Otherwise you’re going to be found out pretty quickly.

[00:17:53] Luke: Yeah. You know, that makes sense. There’s so much overlap here with a lot of the privacy issues too, right? And I think, you know, [00:18:00] it’s interesting because of how incentives on the business side for so long were built around just collect as much data as you can and the richer the profile, the better this, that, and to your point, if people were really thinking about how are we going to use this, you know, you can limit a lot of liability, around not having to be so egregious with a collection or applying all to everything kind of, operationally.

[00:18:20] There’s another area I wanted to ask you about too. There’s been a real boom in the open-source community around AI, especially since the ChatGPT moment came onto the scene. What are you seeing around this from your vantage point? Are you seeing collaboration, or are you all collaborating in your work with the open-source community? How has it been from your side,

[00:18:39] observability-wise but also engagement-wise, this boom on the open-source side around AI?

[00:18:45] Lambert: Yes, we see open source as a very positive thing. We have our own office for open source at the UN. When it comes to AI, we do use the open-source models. We like using those, especially when [00:19:00] we work on projects with other partners.

[00:19:03] If you work on an internal project and you have particular agreements and contracts with vendors, then you can rely on those. But if you work with multiple partners or multiple countries, like we sometimes do, or with different organizations, then you have to make sure that everybody can use it.

[00:19:20] And open source is a really good way to go. We talked a little bit earlier about the equity, the global equity, in access to AI, and I think open source is a very positive thing in that regard. And I’ve been amazed that AI, coming out of the academic sphere, has initially been so open.

[00:19:43] Of course, it’s changing a little bit now, because both companies and nations are starting to see the strategic value of AI, and they’re starting to be reluctant to share everything. But so far, I think it’s fantastic how much sharing there has been in the field of AI.

[00:19:58] Luke: I’ve interviewed some [00:20:00] academics on here as well.

[00:20:01] And with a lot of the concerns that people have, I feel that if they saw how invested the academic sphere is in this, it might help quell a little bit of that. It’s great to hear that you’ve been seeing an openness there, at least for the time being, while people are in this critical phase of figuring out what the tools can do.

[00:20:21] Is there anything we didn’t cover here today that you would like to talk about or kind of share with our audience?

[00:20:27] Lambert: Well, rather than diving into the technical issues, one of the things that I am concerned about is the longer-term effects of all of this tech, including AI and generative AI and whatever we may invent in the next couple of years.

[00:20:46] What are the effects of that in five to ten years? I think they are much more profound than we’re willing to admit. Everybody is focused on short-term applications: okay, let’s build a RAG model, let’s [00:21:00] leverage this or that AI thing in our company. But if I look at what we can do with the technology that is already there,

[00:21:07] I’m not talking about future visions or anything, just the technology that’s already there, and extrapolate that to five or ten years: I think it’s going to affect how we communicate with one another, how we work and generally interact, how we see our own identity, things like authenticity.

[00:21:28] I think the idea of trust will change when there is less and less content in the world that can be easily checked for veracity. Maybe the concept of truth will change. I’ll give you a few examples. You know, it is quite easy now to use an LLM to make all of your emails, your text messages, your letters more eloquent, funnier, more to the point, whatever [00:22:00] you want them to be.

[00:22:01] And I think at some point that’s no longer going to be optional. Because if everybody does that, if everybody writes really eloquent emails, if everybody sends really punchy and funny text messages and you’re the only one who doesn’t, then my Instagram feed or my Twitter feed is just going to be lame, right?

[00:22:22] Nobody’s going to follow it. My friends are going to think I’m bland and boring because I’m the only one who doesn’t spice up their text messages with AI. So it’s not going to be optional. But then, am I still authentically me? What is our identity as people if we can do nothing anymore without the AI, for fear of seeming slow or lame or boring or inarticulate?

[00:22:47] So it starts to encroach on our authenticity as humans. And of course, with all types of content being so easily created these days, the concept of trust, [00:23:00] whether anything is real or not, whether anything is true or not, I think will also erode over the next five to ten years.

[00:23:08] And I think people spend too little time looking at those important changes to society that we are inflicting on ourselves by our eagerness to embrace all these new AI capabilities.

[00:23:23] Luke: Yeah, that’s a fascinating point.

[00:23:25] Lambert: Sorry to end on a pessimistic note.

[00:23:27] Luke: No, no, I don’t think it’s pessimistic at all. I think it’s actually more on the optimistic side.

[00:23:32] The fact that a person like yourself is putting these thoughts out there, I think it’s something we need to hear more about from people. Because from my vantage point, when people don’t hear about these things, you don’t know until you know. And the fact that people are concerned about them matters, especially from your point of view, where you’re not hyper-local about it, right?

[00:23:54] Where a lot of people are trying to change the meanings of [00:24:00] things with an agenda behind it, you have to look at this differently, right? So I think it’s great for people to hear that, because this is a big world that’s getting smaller at the same time, like we’ve talked about through different questions here.

[00:24:12] To that point, if people want to learn more about the work you’re doing or follow you online, where would you suggest they go?

[00:24:18] Lambert: LinkedIn. My name, Lambert Hogenhout, on LinkedIn, or my personal website, lamberthogenhout.com. There are also pages on the UN website where you can see the work we’re doing on AI.

[00:24:29] Luke: Excellent. We’ll be sure to include those in the show notes too, so people can go check that out. You’ve been extremely generous with your time, Lambert. I really appreciate it. I loved this discussion. It’s refreshing to hear how hands-on you all are with the tech, and it’s fascinating.

[00:24:43] Yeah. I’d love to have you back too as these things develop over time, to get more of your take on how things are going.

[00:24:48] Lambert: I’d be happy to. It was a pleasure talking to you.

[00:24:51] Luke: Excellent. Thanks again. Thanks for listening to The Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app.

[00:24:59] If you haven’t already [00:25:00] made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • Organizational readiness—how organizations can ensure that their AI systems are designed and implemented with ethical considerations using safety-by-design principles
  • Building trust amongst your team and community through transparency and responsible AI
  • Maintaining our authentic human identities even as adoption of AI goes mainstream

Guest List

The amazing cast and crew:

  • Lambert Hogenhout - Chief of Data and AI

    Lambert Hogenhout is Chief of Data and AI at the United Nations Secretariat. He is also an author, keynote speaker, and advisor on AI and the responsible use of technology. He has 25 years of experience working both in the private sector and with international organizations such as the World Bank and the United Nations. He leads governance and strategy in the areas of data and AI and oversees its practical implementation. He has published on data privacy, data governance, the societal implications of technology, and responsible use of AI.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.