Back to episodes

Episode 10

AI Ethics & Safety: What Could Go Wrong and How to Stop It

Nell Watson is a pioneering ethics and machine intelligence researcher, and president of EthicsNet. Here, she discusses different scenarios in which things could go “wrong” with emerging technologies, and lead to AI working against the best interests of humanity instead of for it. She also shares the amazing work that’s being done now to prevent this.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:29] You’re listening to a new episode of the Brave Technologist. And this one features Nell Watson, a pioneering ethics and machine intelligence researcher who has been the driving force behind some of the most crucial AI ethics standardization and certification initiatives. Nell dedicates her work to protecting human rights and infusing ethics, safety, and values that elevate the human spirit into technologies like artificial intelligence.

[00:00:51] Through her public speaking, Nell has inspired audiences to work toward a brighter future at venues such as the World Bank, the United Nations General Assembly, and the Royal Society. [00:01:00] In this episode, we dug into some of her thought-provoking work, including the differences and similarities between AI ethics and safety, along with the various approaches to making AI less problematic or risky.

[00:01:11] We discussed a number of different scenarios in which things could go wrong and work against the best interests of humanity, but also the amazing work that’s being done now to prevent this. And now, for this week’s episode of the Brave Technologist. Now, welcome to the Brave Technologist podcast.

[00:01:31] Nell: Thank you, Luke.

[00:01:31] It’s a delight to join you today.

[00:01:33] Luke: Yeah, likewise. And why don’t we get the audience familiar with your background, like what you’re doing in AI and kind of your main focus.

[00:01:41] Nell: Sure. I’ve got a bit of a background in machine vision, with some patents in that space, and co-founded a company that enables body measurement just from photographs, which is still going, with our base in New York.

[00:01:55] But over the last eight years or so, I’ve segued increasingly into the realm of AI ethics [00:02:00] and also AI safety, and trying to improve the state of play there with new standards and certifications and professional credentials, to really build up that entire industry.

[00:02:12] Luke: Wow. So it’s a tall order too.

A lot goes into it. Can you talk a bit about the path from generative AI to agentic, corporate, and interpersonal AI, just for the audience, from a high level?

[00:02:26] Nell: So, in the last year or two, and very particularly since, you know, November 2022 or so, we’ve seen this real explosion in the power and capability and proliferation of generative AI techniques, which, of course, can create all kinds of content in very effective ways, simply using natural language prompts.

[00:02:50] And so one model is capable of tens of thousands of different types of tasks. However, we’re now learning [00:03:00] that we can actually go further with these models by scaffolding them to constrain their thinking a little bit, and to guide it in the direction that we prefer, and to support its thinking as well.

[00:03:13] For example, chain-of-thought style mechanics are enabling us to scaffold generative AIs to form plans of action and to actually put them into practice, and indeed to delegate between different sub-personas of that model, kind of like Agent Smith in the Matrix, splitting itself up into many different little parts that can work together to solve a problem. And this means that systems are now able to take real action in the physical world, or even on the internet. For example, if you ask it to bring you a pizza, it can physically, you know, click on websites and type things in and figure [00:04:00] out where a good place to get pizza from is, and then get it sent to your door.
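
To make the scaffolding idea a bit more concrete, here is a minimal sketch of a plan-then-act agent loop. It is only an illustration of the pattern described here, not any particular product’s code; the `call_llm` function and the TOOL:/DONE: convention are placeholders assumed for the example.

```python
# A minimal sketch of plan-then-act scaffolding. `call_llm` is a placeholder
# for whichever model API you use; the tool-handling convention is invented
# purely for illustration.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("wire this up to a real model")


def run_agent(user_request: str) -> None:
    # 1. A "planner" persona turns the request into concrete steps.
    plan = call_llm(
        "Break this request into short, concrete steps, one per line:\n"
        + user_request
    )
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. A "worker" persona handles each step, optionally asking for a tool
    #    (a web search, a form-filling routine, and so on).
    for step in steps:
        action = call_llm(
            f"You are the worker persona. Current step: {step}\n"
            "Reply with either TOOL:<tool name> or DONE:<one-line summary>."
        )
        if action.startswith("TOOL:"):
            print("would invoke tool:", action[len("TOOL:"):].strip())
        else:
            print("step finished:", action)


# Example (hypothetical request):
# run_agent("Get a margherita pizza delivered to the office.")
```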

[00:04:04] And so these systems are about to become a lot more capable and a lot more practically focused. Beyond that, we’re now starting to see the emergence of corporate AI systems, whereby this ability for the systems to delegate is actually stretched even further, so that we can now have systems such as ChatDev and MetaGPT, which simulate an entire corporate structure.

[00:04:33] And that means that you can have end-to-end product development, such as, for example, a video game, whereby there’s a development department, there’s, you know, an ideation department, quality assurance, even marketing and customer support can all be different virtual departments in this kind of corporate structure, with even AI management overseeing the whole system.

[00:04:58] And this means that we’re now [00:05:00] starting to get kind of corporate AI structures, or, if you will, AI-controlled businesses. And in fact, some of these are going to be hybrid organizations, which even hire human beings to do part of the legwork, the things that machines maybe aren’t so good at, and that’s going to be very disruptive in all kinds of different markets.
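
Frameworks like ChatDev and MetaGPT implement this much more elaborately, but the basic shape, role-prompted agents handing a working artifact down a pipeline, can be sketched roughly as follows. The department names and prompts below are invented for illustration, and `call_llm` is the same placeholder as in the earlier sketch.

```python
# A very loose sketch of the "virtual departments" idea: each department is
# just a role prompt, and the artifact is handed from one to the next. This is
# not ChatDev's or MetaGPT's actual code, only an illustration of the shape.

DEPARTMENTS = [
    ("ideation",  "Propose a one-paragraph concept for: {task}"),
    ("dev",       "Write a short design outline for this concept:\n{artifact}"),
    ("qa",        "List the most likely defects or risks in this design:\n{artifact}"),
    ("marketing", "Draft a two-sentence pitch for this product:\n{artifact}"),
]


def run_virtual_company(task: str) -> str:
    artifact = task
    for name, template in DEPARTMENTS:
        # Each "department" sees the previous department's output.
        artifact = call_llm(template.format(task=task, artifact=artifact))
        print(f"[{name}] produced {len(artifact)} characters")
    return artifact
```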

[00:05:21] Especially in fact within the realm of charitable and mutual societies, you know, credit unions, these kinds of things, where often the overheads of human beings make it difficult to do charitable work effectively. But if you can rely on AI systems instead, if we can make them ethical and robust, which is a huge question, we can potentially unlock all kinds of incredible new value for those sectors as well.

[00:05:49] And finally, we’re now moving into the realm of interpersonal AI systems. So instead of just a chatbot that you can interrogate or ask a few [00:06:00] questions or send a few tasks to, we’re now starting to get systems with large enough context windows. That’s kind of like a sort of short- to medium-long-term memory.

[00:06:12] With a long enough context window, that means you can have several different conversations over multiple occasions, still within the system. And so that means that if you say that, you know, you’ve got a sore back one day, the system can follow up with you a week later and ask how you’re doing, or how was that project you were concerned about, or, you know, you said you were going to go take a walk in the park.

[00:06:36] Did you see something nice, et cetera. And so that means that relationships can actually develop and be forged over time with AI systems. And that’s going to be very socially disruptive, because there’s a lot of people in the world that don’t necessarily have people around them that they can have conversations with. Especially if we have a dark night of the [00:07:00] soul at 3am, having someone or something that we can chat to can make a big difference in people’s lives.
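
One simple way to read that follow-up behaviour is that notes from earlier sessions are carried back into the prompt on later occasions, which is roughly what a large context window (or an external memory) makes possible. The toy sketch below assumes the same placeholder `call_llm` as the earlier examples; real assistants use far more sophisticated memory and retrieval.

```python
# Toy sketch of session-to-session follow-up: earlier exchanges are simply
# prepended the next time the user shows up. Storage is an in-memory list here;
# a real assistant would persist it somewhere.

memory: list[str] = []


def chat_with_memory(user_message: str) -> str:
    context = "\n".join(memory[-20:])  # crude stand-in for a long context window
    reply = call_llm(
        "Notes from earlier conversations with this user:\n" + context +
        "\n\nUser says: " + user_message +
        "\nReply warmly, and follow up on anything relevant from the notes."
    )
    memory.append("user: " + user_message)
    memory.append("assistant: " + reply)
    return reply
```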

[00:07:07] And, it’s often said that we are, in many ways, the average of the six people closest to us. And if one of those, or two of those, is a machine, then potentially that’s going to have a lot of influence over our values and our ways of understanding the world. And indeed, the people that put those values into machines, or those associated narratives, that’s going to give them tremendous power over all of society, which has its good sides and its less good sides, and is somewhat concerning. But there is still tremendous capability for the systems to also enrich people’s lives, to give them guidance, and to give them, you know, a sense of

[00:07:52] a mirror, you know, to talk to. And on some level, if we could have a safe and ethical counseling [00:08:00] type system, that could be transformative for society as well. It could radically improve human wellbeing and mental health across the board. So long as that information isn’t used instead to target people and to potentially demoralize them as a weapon of war.

[00:08:15] So there are multiple different directions in which we can go. And that’s why it’s so important that, as we embrace these technologies in our personal and professional lives, we ensure that there is the highest caliber of ethical and safety elements in the mix.

[00:08:33] Luke: That was quite a few really fascinating different areas there we can unpack, but I’m just kind of curious generally, just given your vantage point, right?

[00:08:41] Like, into the space, a lot of these things, whether that’s the scaffolding or the kind of continued narrative element of the story building, how much of this is still in the abstract from your point of view, versus, like, actually being piloted or actually being technically worked on? Like, is it still pretty theoretical and [00:09:00] abstract?

[00:09:00] Or are you actually seeing some of these things in practice?

[00:09:03] Nell: It’s, in many ways, not at all abstract. These technologies already exist, but they’re in a prototypical form which hasn’t yet reached the mass market. It’s a little bit like how many people in the realm of AI, myself included, were watching the realm of generative AI prior to the emergence of ChatGPT and expecting this sort of Sputnik moment to come, when the general public would realize just how far we had come, you know?

[00:09:33] And it turned out that ChatGPT was that threshold moment that made everyone go, oh wow, this is something very different from what’s been before. And I think that we’re in a similar kind of installation phase, where these things are creeping along, but they’re not necessarily on people’s horizons yet.

[00:09:50] Most people haven’t yet quite realized what’s coming around the corner. And there’s so much of this in the realm of AI, these overhangs, [00:10:00] as they’re called, you know, where you kind of have this thing that’s about to fall, but it hasn’t yet; it’s an overhang. What I mean is that there’s incredible new capabilities in AI.

[00:10:10] And incredible new lines of development that we’ve kind of, you know, briefly shone a flashlight down, but we haven’t really explored yet. But we know that there’s really cool stuff there, and it means that the pace of development and the pace of rapid innovation in AI, there’s no sign of that slowing down. In fact, it’s probably only going to accelerate from this point further.

[00:10:39] The changes are going to be very rapid and hot and heavy, essentially.

[00:10:43] Luke: Oh, that’s fascinating. People tend to kind of bundle AI ethics and safety kind of together. What are some of the similarities and differences between those two areas?

[00:10:52] Nell: Yeah, I think it’s important that people have a chew on the [00:11:00] differences, but also the similarities, between AI ethics and safety.

[00:11:04] AI ethics tends to be quite familiar to many people. It’s really about making sure that systems behave fairly and don’t act in ways that are deeply unfair or catastrophically exclude people, because all of us have had some experience with some social media algorithm that has decided that some, you know, relatively benign comment we’ve made is verboten.

[00:11:31] And, you know, therefore we get a slap on the wrist or something. And often, of course, we’re not even told what it is that we’ve done, you know, what the exact infraction allegedly has been, which is very frustrating, but it means that most of us can intuit how these systems can act in ways which are arbitrary, potentially very unfair to people.

[00:11:55] And when this goes wrong, it can be catastrophic, because we’ve seen the [00:12:00] example of the Horizon Post Office scandal in the United Kingdom, where a system which was designed to detect fraud wrongfully implicated hundreds of people for having defrauded the Post Office when they never did it. And in fact, dozens of people were wrongfully sent to jail for years.

[00:12:21] They had to sell their homes to pay back a debt that wasn’t theirs. There’s three suicides directly implicated from this event. And there’s been no accountability at all. Like, there’s been nothing. And it took years and years for these people to fight for their vindication.

[00:12:38] And all the time, the system was still trusted. The problem is that this keeps happening. You know, similar things happened in, for example, the Netherlands with the Dutch childcare benefit scandal, which was extremely disproportionately interested in people whose first nationality was not Dutch, even if they had become naturalized [00:13:00] citizens, and people were threatened with having their children taken away from them, etc.

[00:13:05] And it caused such a furore that actually the government of the Netherlands resigned. You know, and the same thing happened in Denmark, happened in Michigan, happened in Australia with Robodebt, et cetera. We trust these systems too much, and they might work very well 95 percent of the time, but one time in 20, it’s going to go wrong.

[00:13:27] And if it’s something that involves somebody’s personal liberty or their economic freedom, et cetera, that can be really, really devastating to people’s lives, or indeed something which might affect their health, right? And that’s why we need to be very careful to ensure that there is ethical transparency into these systems, so we know what they’re doing, for whom, in what way, and, you know, ultimately to whose benefit.

[00:13:53] That we understand the potential disproportionate biases that might be built into these systems. That [00:14:00] we can improve the accountability of these systems, so that when things go wrong, there’s kind of a black box log, and we can prevent this from happening in the future. These are all the main pillars of AI ethics.
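
As a rough illustration of that "black box log" pillar, here is a minimal sketch of decision logging. The field names and the JSON-lines format are choices made up for the example, not any standard; the point is simply that each automated decision leaves an auditable record.

```python
# Minimal sketch: every automated decision is recorded with its inputs, the
# model version, and the stated reason, so there is something to audit when
# things go wrong.

import json
import time


def log_decision(path: str, subject_id: str, inputs: dict,
                 decision: str, model_version: str, reason: str) -> None:
    record = {
        "timestamp": time.time(),
        "subject_id": subject_id,
        "inputs": inputs,
        "decision": decision,
        "model_version": model_version,
        "reason": reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example (hypothetical values):
# log_decision("decisions.jsonl", "case-1042",
#              {"till_discrepancy_gbp": 312.0}, "flag_for_review",
#              "fraud-model-0.3", "discrepancy above threshold")
```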

[00:14:15] However, there’s also the domain of AI safety, and that’s a little bit less intuitive. AI safety is more about the sort of Terminator-type problems, where you have a system that ends up doing things which are against the interests of humanity, and not necessarily because it decides to no longer be humanity’s slave or, you know, gets angry with humanity, but rather because, if you set a goal for a system, it’s actually very difficult to get it to do exactly what you want, and no more and no less.

[00:14:51] You know, if you tell a robot to clean your office, you know, does that mean like taking the varnish off your desk? No, that’s probably too much, right? [00:15:00] Or if you define it as, like, well, I don’t want to see any dirt in my office, you know, it might decide to put a bucket on its head so it can’t see anything, and therefore, you know, mission accomplished.

[00:15:09] Right. And it turns out that systems are actually quite happy to do this, generally. They’re quite happy to find shortcuts which technically fulfill the remit, but not in a way that human beings wish for or asked for. And another issue is that systems can develop what are called instrumental goals, which are kind of like sub-goals along the path to completing their mission.

[00:15:37] Say, for example, you, you tell a system, you know, go and cure cancer, right? But then it decides, well, that’s a really big problem. So to do that, I need to have lots of money. I need to have lots of influence. I need to have lots of computational power. And suddenly the system is, is doing things which are scary and potentially very disruptive to [00:16:00] human society, even though its ultimate end mission might be relatively benign or even helpful.

[00:16:06] And so these problems are beginning to crop up. They used to be science fiction. They used to be largely theoretical, apart from one or two examples in a lab. But now that we have these agentic systems, which are able to create their own missions and to put plans into action, now we’re starting to see these problems in real life, in the real world, for the first time.
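
The "bucket on its head" story above is an instance of what is often called specification gaming or reward hacking, and it can be shown with a toy objective: if the goal is scored by how little dirt the system can see, blocking the sensor scores just as well as cleaning. The numbers below are made up purely for illustration.

```python
# Toy illustration of a mis-specified goal: covering the camera earns the same
# reward as actually cleaning the room.

def visible_dirt(room_dirt: float, camera_blocked: bool) -> float:
    return 0.0 if camera_blocked else room_dirt


def naive_reward(room_dirt: float, camera_blocked: bool) -> float:
    return -visible_dirt(room_dirt, camera_blocked)


print(naive_reward(room_dirt=0.0, camera_blocked=False))  # actually cleaned: 0.0
print(naive_reward(room_dirt=9.0, camera_blocked=True))   # bucket on head: also 0.0
```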

[00:16:34] And we’ve done very little research on this area as a species, you know, this AI control problem or AI alignment problem. We have a lot of learning to do as we master this space in a very short period of time because the developments are coming so quickly. And so that’s kind of the realm of AI safety.

[00:16:57] And it’s a shame that, [00:17:00] often, people within those two camps of AI ethics and safety don’t talk to each other, or they sometimes even dismiss each other. Because the AI ethics people are saying, hey, like, there’s, you know, there are people here today whose lives are being ruined by these algorithmic systems.

[00:17:15] Like, we need to focus on this, not some theoretical thing that might happen in the future. Whereas the AI safety people are saying, well, like, we’re trying to prevent civilization from being destroyed by these kinds of systems. Your little prosaic problem of, you know, people being bullied by algorithmic tyrants is a problem, but it’s a petty problem.

[00:17:35] Like, we, you know, we’re trying to focus on big things here. And it’s unfortunate because both approaches to making AI less problematic and more tractable fit together very nicely. You know, if you can improve the general transparency of systems and their accountability, etc. That’s going to help you a lot with safety, right?

[00:17:57] And if you can learn better how to [00:18:00] specify goals and to align AI with human interests, that’s going to create fewer misapprehensions by systems, which could catch people up in them in ways which are unfortunate. So I’m hoping to build better bridges. I’ve got a book coming out around about May 2024 called Taming the Machine, which looks at these issues, both AI ethics and safety, for the general public and for business leaders, and helps to explain these problems and the solutions for them, which we’re learning about, in ways that are relatively easy to understand and indeed to

[00:18:40] actionably implement.

[00:18:41] Luke: Yeah, it’s so interesting. I mean, like, especially those ethics cases you mentioned, right? And maybe we can touch on that a little bit, because I see this in other areas in tech all the time, right? Where major issues happen and impact a lot of real people. And that accountability piece is just

[00:18:56] not there, and it’s almost like people aren’t even aware that these things [00:19:00] happen. I think, you know, just kind of running through those cases, it was like, geez, this is a lot of information that could be benefiting people to understand and learn about when they’re kind of developing and programming these things.

[00:19:10] Do you think that some of this lack of accountability is just not a lot of understanding by people who would hold people accountable, or just a lack of information dissemination? What are some of those challenges that you see around the accountability piece in particular? Because I think that’s something where the impacts of this technology sprawl so quickly, right?

[00:19:30] Like, once it’s distributed. You’re talking about wrecking, like, the UK is one thing; if you spread that across the whole US, right, that would be huge, you know, tens of millions, hundreds of millions of people, right, that it could be impacting in a negative way. Like, what are some of those challenges on the accountability side that you kind of wish would get addressed differently, or where you see room for people to kind of make improvements, or anything like that?

[00:19:52] Nell: It’s a really, really good observation. And I do think that it is an information dissemination problem, for one, that [00:20:00] a lot of people simply haven’t realized the devastating impact that these systems can potentially have, and have had, you know, indeed for many different people around the world. I think that there’s going to be a lot of moral panics around the world as people realize, you know, that they have been, in some way, really messed around with by one of these systems, you know?

[00:20:23] Once they find that they end up in some sort of Kafkaesque bureaucratic nightmare that they can’t seem to get out of, then there’s gonna be a greater awakening, I would say, and a clamour for improved stability and robustness and fairness in these systems, and indeed accountability for those who have created these problems and haven’t taken proper steps to address them.

[00:20:49] And I’m a little bit saddened by the fact that, in the wake of the emergence of ChatGPT and people waking up to the power of generative AI, [00:21:00] which had sort of been waiting in the wings for a while, a lot of the larger tech companies got rid of their ethics people. So the people that were supposed to be asking questions, or saying, excuse me, have you thought about this?

[00:21:13] Or, you know, could you perhaps improve your system by adding X and Y and Z? They got rid of them. And that means that the tech industry is not ready or able to self-regulate, because as soon as, you know, people realized there was a land rush and they needed to sort of move very quickly, they didn’t want any speed bumps.

[00:21:37] It was very much a question of let’s move fast and break things. Let’s push it out there and fix the bugs once it’s had first contact with human beings. That’s not a smart or ethical way to do business, especially when these systems are increasingly capable and increasingly being mixed up in all kinds of other systems.

[00:21:58] For example, [00:22:00] there’s a real emerging problem of shadow AI, where people are using AI within their professional workflow, but in a way that isn’t transparent, right? So you’ve got people who’ve been asked to review an academic paper, and, you know, they can’t be bothered, et cetera. And so they put the paper into ChatGPT and get it to spit out some comments, et cetera, for the author, right?

[00:22:27] And there’s no transparency that that has been done. And so there’s all kinds of different ways in which the system could be biased in various ways or could have outright invented, confabulated, hallucinated data or information that doesn’t actually exist. This is a real problem with these systems when they don’t have enough training data to work with.

[00:22:48] But it’s even worse when we have reports that even people deciding whether or not somebody gets a visa, like, you know, at national governments, sort of like the [00:23:00] State Department or whatever around the world. Often these personnel are using chatbots and things to make these deliberations. And that’s part of their professional workflow, but there’s no record of that.

[00:23:12] And there’s no accountability for when things go wrong, as to why they went wrong. Was it a wrong decision by a human being, a machine, or some combination? And so that makes it even more difficult to understand the ways in which AI systems can be impacting people, and to debug them, and to

[00:23:31] detect it in the first place.

[00:23:34] Luke: Do you think that there’s kind of like a lack of laws or regulation around the accountability piece? My mind is just kind of racing through those examples you gave, along with a lot of the things that I see where this technology is starting to take hold around things like finance, right?

[00:23:48] It almost sounds pretty benign to have a pilot that’s helping to detect fraud in a mail system. But in reality, like there’s obviously, you know, some pretty bad second order effects that happen when that [00:24:00] goes wrong. But I imagine too, right? Like if that’s connected to a system that’s also applying it to a credit rating or something that can determine the next thing that you’re able to do for your family financially, right?

[00:24:10] Like, these things could even have a more devastating impact. I interviewed a lot of people this season around this topic, and there’s been a recurring theme among them that one of the biggest fears from people is that there are going to be countries where a lack of accessibility to this technology is going to cause them to fall behind.

[00:24:27] But if I pair that with some of the things you’re saying here, it’s like even more kind of crazy, because without the right kind of ethical approach to these things, you’re just going to get even less oversight, right? Like, or fewer people concerned about these little things. You can do a lot with this technology, right? Like, if you had a magic wand with this stuff, would it be to kind of try and get more people on that ethics and safety front at the technology companies, like kind of at the root of the problem, or more education for governments on this topic, or both? The challenges are so big, right? It’s kind of like, how do you even start to grapple with them? What are the kind of priorities that you have in your mind and your work around trying to get these things to take hold?

[00:25:10] Nell: It is very much a comprehensive, big, hairy problem that, you know, as you say, you’ve got to try and tackle from multiple different directions. I do think that public education is super important, so that people understand the ways in which these systems are functioning and how they can go wrong, how they have gone wrong in various examples.

[00:25:33] And to better empower people to take action against that, you know, to file a query as to exactly why their loan application had been denied, or what were the exact predicates that that came from, and whether that was made by an algorithm or a human or some combination, and what combination, I think.

[00:25:54] We have made a lot of progress in recent years on creating new standards and certifications for [00:26:00] AI ethics and transparency, accountability, mitigation of bias, et cetera, which I think is very important. However, there hasn’t been much movement from the industry to adopt these, which is a little bit frustrating.

[00:26:15] I do think that regulators will start to oblige corporations to use these and to embed these into their ecosystems. And I think that’s going to be an important, good start, right? We’ve moved far beyond simple AI ethics principles. We now have actionable, benchmarkable rubrics that we can implement not only into systems, but also into the organizations behind them, to understand the different

[00:26:43] incentive structures, for example, which might be embedded in those. You know, for example, if you have an organization which is concerned about intellectual property, that’s likely to detract from creating the quality of transparency within that organization, for example, [00:27:00] right? And I’m very glad now that we have these abilities to map the space and to understand the different drivers and inhibitors which can lead to ethical or less ethical outcomes.

[00:27:13] It’s a matter of time. It takes time for regulators to understand how the world has changed. And it’s going to be very difficult for them because, as I mentioned earlier, the new capabilities and new flavors of AI which are coming out, waiting in the wings today, will be arriving on scene very soon. That means that regulators are going to be struggling to keep up with those and struggling to enact good rules to support those.

[00:27:45] And I have seen some inklings of good new regulations coming out of the US and EU ecosystems. Broadly, I think I’m personally quite happy with it so far. And a lot of the [00:28:00] more egregious issues of AI, for example, inferring somebody’s sexuality or their intelligence or their health status simply from a camera, like watching them walking through a mall, for example, which is, you know, a hideous invasion of privacy and yet totally possible for these systems to do, you know?

[00:28:23] For example, consider a system which can detect if somebody is pregnant simply by observing the blood flow through their face from a CCTV camera. The system might detect that a person is pregnant even before they know it themselves. And yet, of course, if an employer has access to that information, or an algorithmic management system has access to that information, even if the employer themselves may not realize it, then very unfair outcomes can come from that.

[00:28:56] And so I’m glad that regulators have moved to take steps [00:29:00] to prevent those kinds of egregious forms of AI inferring things that it shouldn’t. However, there are a lot of questions with regard to open source. And so far, both the US and EU ecosystems seem to be kind of waving their hands and shrugging at open source, because if it’s a corporation doing something, you can, you know, garnish its income and say, very naughty, here’s a judgment against you.

[00:29:31] But if it’s an open source model proliferated online by an anonymous person, it’s very, very difficult to take steps against that. And yet we’re finding that open source technologies are basically six to eight months behind the very best in proprietary technologies by corporations. And so they’re, they’re very fast following the trends and they’re doing things much more cheaply.

[00:29:58] And we [00:30:00] now even have models which can run on standard consumer hardware, like a laptop or even a smartphone. You know, we can quantize these models to sort of prune them slightly, and it reduces their capabilities a little bit, but it reduces the memory requirements massively. And so that means that we can have these models running on our own hardware, and not somewhere in the cloud.
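
For readers who want to see what this looks like in practice, one common route to running an open-weights model locally is low-bit quantization. The sketch below uses the Hugging Face transformers library with a 4-bit bitsandbytes configuration; treat it as an assumed, typical recipe rather than a guarantee, since exact APIs, model identifiers, hardware needs, and quality trade-offs all vary.

```python
# One typical recipe for running an open-weights model on ordinary hardware:
# load it with 4-bit quantization so the memory footprint shrinks dramatically.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # example model; substitute your own

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",  # place layers wherever there is room (GPU first, then CPU)
)

prompt = "In one sentence, what does quantizing a language model do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```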

[00:30:28] And so there are models which are very powerful, which have been released accessible to anyone, such as Llama 2, or more recently the models released by Mistral, which are very powerful. But we can strip the safeguards off these models in, you know, 20 minutes of effort and about 50 bucks of computing time; we can take a model which is supposed to behave in a relatively safe and ethical manner and completely [00:31:00] take those safeguards away, to have this model which will do anything, anything we ask of it.

[00:31:06] And that’s why we’re now starting to see a lot of problems with models creating phishing emails which are far better than even human beings typically will write. We’re starting to see voice phishing, vishing, whereby as little as two seconds of a recording of someone’s voice can be used to clone their voice.

[00:31:29] For example, an answerphone message, you know: Hey, you’ve reached Bob Smith, I’m not able to take your call, please leave a message. That snippet could be enough to clone that person’s voice, and we’ve already seen that. You know, PAs of CEOs, et cetera, have wired enormous amounts of money from the corporate account because they were called up by someone that they believed was the CEO, right?

[00:31:54] We’ve seen little old ladies convinced that their children had [00:32:00] been kidnapped, right? And of course that wasn’t the case, right? That’s the power that these models now give to those bad actors out there. Regulators don’t seem to know what, if anything, can be done about this, and so for the moment they’re just kind of watching, a little bit aghast, and seeing what happens, and if there’s any way to influence this.

[00:32:26] Luke: Yeah, the hamster wheel’s running on some of these things. I mean, because, you know, you deal with a lot of things here. I mean, like, even from our browser level, right? Like, we’re working on some of these local models, but also, like, some of the old versions of these problems are still so prevalent, right?

[00:32:39] Like, just thinking about how this could compound with a better tool for a bad actor, it’s just mind-blowing. I normally have some generic questions; I’m gonna throw those out, because this has been super interesting, and I think it’s important. A lot of developers listen to the show, and people that are in the software space. And kind of touching on what you were mentioning earlier about more public awareness around these [00:33:00] things:

[00:33:00] Like, are there resources you could recommend? I know you mentioned you have a book coming out, right? Are there other resources you’d recommend for people that are kind of getting into this field to brush up on these topics that would be helpful?

[00:33:12] Nell: Yeah. Well, first off, I’d recommend that people check out the AI Incident Database.

[00:33:17] In fact, there’s a couple of different ones of these, but they’re all pretty good ways of mapping the horizon scanning, if you will, for, you know, problems which have arisen due to the usage of AI. And that’s a good way of understanding how things can go wrong, and indeed opportunities to help to put things right.

[00:33:41] I also recommend that you check out the Alan Turing Institute’s AI Standards Hub, because they’ve got literally hundreds of different AI related standards. And they’re all, you know, organized in ways that are easy to, to search. And so if you have a specific [00:34:00] AI related problem that you’re interested in, you know, seeing if there’s a standard that could improve how that’s done, you’ll probably find that there.

[00:34:09] And I also recommend checking out the IEEE GET Program, G-E-T, because they provide a whole bunch of different ethical standards related to AI, and a few other domains as well, entirely gratis. So it’s kind of pro bono for humanity, in order to try and set a basic level of benchmark for all AI systems, to have that baseline and hopefully build up from there.

[00:34:39] Luke: Excellent. I really appreciate all your time too today. I mean, is there anything that you want to leave our audience with that we maybe didn’t cover?

[00:34:47] Nell: I’d say that I would advocate for a lot of caution towards the usage of AI systems, particularly in high-risk or high-impact domains, [00:35:00] where, if things go wrong, and they almost certainly will, people will be greatly impacted. You know, so maybe don’t use AI in healthcare or in, you know, the judicial system, et cetera, unless there’s a very good reason to do so, and unless you’ve really hardened that system against errors and, you know, made sure that there are lots of humans in the loop who can intercede and help to steer that system in a different direction if things are going wrong.

[00:35:28] But I would also say, don’t give up on the magic, also. There are incredible things that AI can do for us, incredible ways it can solve very difficult problems in society, and indeed help to improve human well-being in radical new ways. You know, our civilization has gotten relatively good at solving for the bottom levels of Maslow’s hierarchy of needs.

[00:35:56] You know, food, shelter, you [00:36:00] know, warmth and all that kind of stuff. Increasingly, for more people around the planet generally, we’ve gotten quite good at providing those things. But the higher needs of love and belonging, self-esteem, self-actualization, et cetera, we don’t have good answers for solving those needs. You know, the best we can say is, you know, read a self-help manual and maybe go to therapy, but, like, good luck, you know. However, AI has the ability to actually help us to solve these needs in a healthy way, potentially, right?

[00:36:34] To act as a mirror, to help us to understand ourselves better and to get out of our own way, right? And to spot when we’re about to send that text to our ex whilst drunk at 2am, right? Or, you know, when we’ve had a really hard day and we’re reaching into the back of the freezer for the Häagen-Dazs, like, to intercede and sort of remind us that there’s other ways of doing things. [00:37:00]

[00:37:00] AI might be a fantastic guide and mentor for all of humanity in the years to come. And in many ways, AI is the closest thing to magic in the world today. And that is why it’s so powerful and so important, and that’s why we need to be cautious in our use of it and not to be Muggles, because all of us, every one of us, we are a wizard, Harry, or Harriet.

[00:37:29] So let’s be cautious and let’s be smart.

[00:37:32] Luke: I love it. I love it. I can’t think of a better note to end on with that. If people want to find you online or learn more about your work, can they look you up?

[00:37:40] Nell: Sure. I’ve got a bunch of stuff on nellwatson.com, that’s november, echo, lima, lima, watson.com. You can check it out.

[00:37:47] Luke: Thank you. Awesome. Thank you, Nell. And I hope to have you back to get an update, maybe, you know, early to mid next year or something like that, and kind of check back in and see how things are going.

[00:37:57] Nell: That would be a great pleasure, Luke. Thank you.

[00:37:59] Luke: All right. [00:38:00] Thanks, Nell. Have a good one. Thanks for listening to the Brave Technologist podcast.

[00:38:05] To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.


Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • Why AI safety and AI ethics teams need to collaborate and work together over their shared goals, and build better bridges
  • The differences and similarities between AI ethics and safety, along with the various approaches to making AI less problematic or risky
  • The emergence of “shadow AI,” and the increasing need for ethical transparency and accountability
  • Why the tech industry is not ready or able to self-regulate

Guest List

The amazing cast and crew:

  • Nell Watson - President of EthicsNet

    Nell Watson is a pioneering ethics and machine intelligence researcher, and a driving force behind some of the most crucial AI ethics standardization and certification initiatives. She’s also the president of EthicsNet. Nell dedicates her work to protecting human rights and infusing ethics, safety, and values that elevate the human spirit into technologies like artificial intelligence.

    Through her public speaking, Nell has inspired audiences to work towards a brighter future at venues such as the World Bank, the United Nations General Assembly, and the Royal Society.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.