
Episode 12

The EU’s AI Act: Creating Legislation That Will Survive Over Time

Gabriele Mazzini, Team Leader - AI Act at the European Commission, discusses the risk-based approach the Commission took when crafting specific rules for the Artificial Intelligence Act (regulating specific uses of the technology rather than the technology as a whole). He also discusses the complexities involved in regulating emerging technologies that evolve at a much faster pace than the legislation itself.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:29] You’re listening to a new episode of The Brave Technologist, and this one features Gabriele Mazzini, who is the architect and lead author of the European Commission’s proposal for the Artificial Intelligence Act. In this role, he has focused on the legal and policy questions raised by new technologies since August 2017.

[00:00:46] Before joining the European Commission, Gabriele held several positions in the private sector in New York and served in the European Parliament and the EU Court of Justice. He holds an LLM from Harvard Law School and a PhD in Italian and [00:01:00] Comparative Criminal Law. In this episode, we talked about the vision for the development and regulation of AI put forward in the AI Act, what the process of developing the AI Act has been like, including who the key players were, where regulation in this space can really be helpful, and the complexities involved.

[00:01:17] And now for this week’s episode of The Brave Technologist. Gabriele, welcome to The Brave Technologist. How are you doing today?

[00:01:28] Gabriele: Thank you, Luke. Thank you so much. It’s really a pleasure to be with you today. Yeah.

[00:01:32] Luke: Just to get started, maybe you can give us a little idea of how you ended up doing what you’re doing, and whether it was something you had a general interest in or, uh, a fortuitous chain of events kind of got you there.

[00:01:44] Gabriele: Yeah, I think it’s kind of both. I ended up getting interested in technology when I was working in the U.S. I spent a few years working in international development for an NGO, which was founded by Professor Sachs. So we were working in Africa, and we were working with [00:02:00] solutions that are technology-based.

[00:02:02] So that really triggered my interest in how technology can serve to improve people’s lives. Then I also spent some time working with technology startups in New York. So when I came back to Brussels, I really wanted to do policy in relation to technology. And that’s how it was back in 2017, when the Commission started reflecting on what the policy and legal implications could be of things around data, big data, IoT. Actually, at the time, AI was not yet, I would say, the buzzword it is today.

[00:02:39] And so I was there right at the beginning, and initially I focused on the sort of tort law implications of IoT, so when something goes wrong, like in a robot or something. But then, little by little, it became a bigger discussion around the ethical implications of AI and things like that.

[00:02:59] Luke: Yeah, [00:03:00] it seems like a really interesting time to be getting involved there. I know privacy, uh, the GDPR and a lot of the other regulations were kind of springing up then. Was there anything in your background that really helped you get where you are, or skills or anything that made it kind of a natural fit?

[00:03:16] Gabriele: I think, generally, I have to confess I’ve been quite eclectic in my professional life, let’s say. I mean, I’ve always practiced as a lawyer, worked as a lawyer, but I started off by being interested in criminal law. Then I moved to dealing with international law and European law, like the institutional setting.

[00:03:39] I also dealt with things around fisheries and agriculture. Then, when I moved to the U.S., I was essentially an attorney for a nonprofit. So I did a lot of corporate law, contract law, governance, and then, with the startups, also issues around technology and sort of [00:04:00] IP. So I guess this journey helped me be a little bit eclectic in the way I look at the law.

[00:04:07] And AI, by its nature, is a general purpose technology, right? So it’s a technology that impacts economies and societies across a broad range of domains. And so all sectoral legislation, if you want, is somewhat impacted. That, I think, has been my added value. When I started reflecting about the implications of AI for EU law, I already had a sense that this could not be just one single sector; it had to be holistic.

[00:04:40] And so after I joined the Commission, and it was indeed, as I mentioned before, focusing on the tort law applications, I spent quite a lot of time thinking about how AI impacts other areas like, you know, privacy, fundamental rights, product safety, consumer protection. And I remember [00:05:00] spending my weekends over several months studying this matter and writing papers.

[00:05:07] And that, I think, is what ultimately led me to being asked to really work on the AI Act and the regulation of it.

[00:05:15] Luke: That’s helpful. That’s a really great, broad range of background for tackling something like this, which is great because you kind of need that, right? Like you were saying, it’s a general purpose technology, and how much it’s currently in use, whether or not people realize it, is something I’m finding out more about personally every day just by having different people on this podcast. Looking in on the Act: what’s been your role in the development of the Act? Maybe you can give the audience a little sense, from a high level, of what the Act is about and how it’s going to impact people’s lives.

[00:05:48] Gabriele: Yeah, sure. So my role has been team leader for the AI Act at the European Commission. Basically, I’ve been the architect of the AI Act and the lead drafter. At some moment I closed [00:06:00] myself at home for like two weeks, and I conceptualized the whole architecture of the AI Act.

[00:06:07] Then, of course, we had proper validation within the Commission, whether the concepts were sound and the ideas were correct. And then we started the drafting. We divided the drafting among myself and a couple of other colleagues, and my role was to ensure, you know, the consistency of the whole drafting and so on.

[00:06:28] And then, once the proposal was out, we let it go through its own institutional process. My role has been to essentially keep following the proposal as it evolved in the whole institutional setting of the EU, which requires a vote by the Parliament and the Council. So the Commission is kind of the initiator of the legislation.

[00:06:48] We propose it, and then it’s up to the Parliament and the Council, who operate a bit like a bicameral legislature, like the U.S. Congress with the Senate and House of Representatives, [00:07:00] to actually decide. And our role is that of a facilitator of that process, to really help them find a solution and compromise.

[00:07:09] So that’s why my role didn’t end once the proposal was out; together with my colleagues, I kept following the process and advising the co-legislators on finding a deal.

[00:07:21] Luke: When you’re looking at the AI Act, this is new technology, and we see this often with regulation and technology, where you can see regulation pass, and then when it comes to enforcement, or even how the process is driven, so much of this is new: the technology, how it works, how people interact with it, what the chain of custody and provenance is with data, etc.

[00:07:44] When you’re crafting something like this, how difficult is it to balance all of that out in regulation? Are you guys setting a framework here for people to follow? Or is it a process where you’re having to educate the policymakers and the [00:08:00] officials that are going to have to administer and govern this regulation?

[00:08:02] Like, I guess what I’m trying to get a sense of is, I know just from being on the technology side of this how fast it moves and how broad it is, right? How challenging is it to create something like the Act when you’ve got all these different variables working in the process?

[00:08:17] Gabriele: I mean, I cannot deny it: it’s not an easy task to regulate technology.

[00:08:20] It’s absolutely the case that technology moves very fast and the law doesn’t move as fast. And it should not. I mean, frankly, the law should not follow the same pace as technology. But the trick is to craft the law in such a way that it can withstand the passing of time. And that’s a challenge, because for matters that don’t evolve that fast, it’s somewhat easier to write a law that is going to be stable over time.

[00:08:47] When it comes to technology, it’s much more difficult. And so I think we did our best in ensuring that the Act would include certain elements of flexibility [00:09:00] that will not require, you know, an adaptation of the law in a short amount of time. But certainly there are limitations to that as well, because, of course, on the one hand, the law takes time to process.

[00:09:12] So, as I mentioned, you know, we did the proposal in 2021. Then we sent it to the Parliament and the Council. They took their own positions on the proposal, the Council in December 2022 and the Parliament in June 2023, and then they had to negotiate among themselves. And now we are in the final phases.

[00:09:36] We have a draft provisional agreement that needs to be ratified. So you see, at least three years have passed. And guess what? During these years, we had ChatGPT, which made things much more complicated, I mean, also politically, and in the awareness by the public of what AI is, what the risks of AI are, and, you know, how we should deal with it.

[00:09:57] That certainly also shaped [00:10:00] the process of the AI Act, which initially was not meant, let’s say, to cover this type of technology, or at least, I should say, to regulate it directly. There was even some criticism of the Commission at some point that, you know, in our proposal we had not foreseen generative AI per se as in the scope, and that is perhaps not accurate, I would say.

[00:10:22] So conceptually, for instance, our definition of AI, the definition we proposed, included generative AI as well, but we didn’t foresee specific rules. And that, I think, is the question: do we want certain specific rules for this type of technology, like generative AI, as opposed to rules for how the technology is used?

[00:10:42] For instance, you may know the AI Act is very much focused on risks. We talk about a risk-based approach, which means we are regulating not the technology per se, but the use of the technology, depending on the risk that a certain use may create. Let’s say a [00:11:00] certain type of social scoring we think is bad.

[00:11:03] So we want to ban that. Then there are situations where the use may create some risk that we consider high, and in that case we subject the systems to certain types of strict requirements that need to be verified ex ante, meaning before the system is placed on the market. And then we have a third level of risk, which we call low risk or transparency-related risk,

[00:11:29] where the risk is in fact more related to a lack of disclosure about the existence of the system or the product of the system. And here we go back to generative AI, where we have proposed that, for instance, deepfakes should be labeled, or that I should be made aware if I’m exposed to certain types of systems and I’m not sure whether I’m interacting with an AI or not.

[00:11:53] So this is just to give you a sense of the risk-based approach and why we wanted to introduce rules specific to the types [00:12:00] of uses as opposed to the technology per se.

[00:12:04] Luke: It makes sense, especially with what you were talking about earlier, where you’re trying to make a piece of legislation or regulation that will survive over time, right?

[00:12:12] If it gets too in the weeds with one thing over another, it’s going to be out of date in short order, right? One question I have, too, around the process: when you’re crafting the Act, are you engaging with the technology companies as well? Are they providing input in that review process at all?

[00:12:29] Maybe folks could get an understanding of how involved they are or aren’t in that process.

[00:12:35] Gabriele: Yeah, totally. I mean, I would say it’s probably the best part of my job to really engage with what we call the stakeholders. Because the Act is a horizontal piece of legislation,

[00:12:47] it’s not just, let’s say, oh, I’m regulating AI for financial services, or I’m regulating AI for products. It’s really horizontal in that it impacts several sectors. So we really had the chance to engage with [00:13:00] stakeholders from all areas and domains, which I think was very enriching and needed. But this is generally, I would say, part of

[00:13:07] the process that the Commission follows in any case. Every time we submit a proposal, first of all, we do an impact assessment, meaning we assess the impacts of the potential regulation in a certain area, and this already gives stakeholders the possibility to engage with us, to submit opinions and papers.

[00:13:27] We do public consultations. Then, even after the proposal is out, the consultations keep going. We received so many statements and position papers from all sorts of stakeholders. As I said, industry, definitely, industry needs to be listened to, or at least we need to engage with industry because they are the ones who develop those systems, but at the same time also civil society, for instance, or public administrations.

[00:13:52] They also have a stake in different respects. So I can definitely confirm that we spent a lot of time [00:14:00] talking to all of these stakeholders, and ultimately it’s our responsibility, when we decide to put down a proposal, to draw a line and find a balance between all these interests. But we definitely engaged with all of them.

[00:14:12] And of course, even once the proposal is out, this process doesn’t stop, because stakeholders, you know, talk to the Council, so they talk to the member states, and then they talk to the parliamentarians in the European Parliament. So this is a continuous process.

[00:14:28] Luke: Yeah, and that’s a great lead-in to the next question I had.

[00:14:32] I mean, these things, aside from the technology being complicated and used globally, they’re adopted globally too, right? When you guys set out to make this Act, was there any conferring with any of the other regional stakeholders that are making regulation elsewhere? It’s one of the things I’m trying to wrap my head around.

[00:14:50] And we saw this with privacy quite a bit when GDPR rolled out: a lot of publishers, at least initially, thought, well, it’s just going to apply in Europe. And then it turns out, well, actually, if you’re doing business [00:15:00] into Europe from outside, it does apply to you, so people were kind of scrambling. I mean, I wonder a bit:

[00:15:05] Are the table stakes here around the Act flexible enough to survive a global adoption of this thing, or to be used by other countries or zones that are trying to regulate this? Or have you guys talked to anybody that might be doing it?

[00:15:19] Gabriele: Yeah, definitely. I mean, of course, when we started this process, we wanted to regulate [00:15:25] our jurisdiction, so our place, the EU market, right? And of course, we wanted to do this in line with our values. That’s the reason why we do it in Europe: because we have, let’s say, certain approaches to how to balance, you know, privacy versus other interests, or fundamental rights versus innovation.

[00:15:45] And so we certainly focused on what the right balance is for us. At the same time, of course, we are aware that there is a common interest in ensuring a global convergence in the direction of certain [00:16:00] principles and approaches. We engaged with our partners from the beginning in many respects, bilaterally but also regionally.

[00:16:10] Like, for instance, in the context of the OECD, where actually the definition of AI that is in the AI Act is pretty much aligned with the definition that was put forward by the OECD. We had to make some adjustments, because the OECD is not a regulatory body like the Commission, but they came up with a definition which found a consensus among many countries beyond the EU, including the US, Canada, Japan, and so on.

[00:16:39] And so it was definitely important for us to anchor the AI Act to a definition on which everyone could agree. And this is, for instance, a sign of how we really took a more holistic way of looking at the regulation. And then, yes, indeed, we’ve been discussing bilaterally with many countries; we’ve been [00:17:00] asked to do presentations for many countries who were interested in

[00:17:04] or thinking themselves about, you know, how to deal with AI. Some of them have advanced faster; for instance, Brazil has been quite advanced in also doing AI regulation. Some less, because perhaps they want to first see how this effort pans out in the EU and so on. But while taking care, let’s say, of our own constituency, because that’s been my first job,

[00:17:32] we really wanted to make sure that this is a blueprint that can work for everybody as well.

[00:17:38] Luke: Yeah, no, that’s great. I mean, certainly on the privacy side too, it’s been helpful that Europe kind of set a standard of, okay, this is what personal data means, which, as somebody who works on privacy tooling, has been extremely helpful, right?

[00:17:52] Because there’s nothing more tricky than having a lot of uncertainty around these things, especially when you’re building products. It’s one of those things people fear a [00:18:00] lot in the space, and we work with startups and developers and folks like that. And, you know, people tend to get hyperbolic whenever regulation itself is brought up.

[00:18:08] When you’re working with these different stakeholders from the industry side of this, are they extremely concerned about the fact that regulation is being worked on or coming? Or does everybody just kind of accept that this is something that’s going to be regulated, a let’s-get-involved-early type of thing?

[00:18:24] I think one thing that would be helpful for people to understand here is that regulation doesn’t necessarily mean a bad thing. It can actually be a good thing for business when you have rules that you can work with. But I’m just curious whether there has been fear from people in the industry, or whether everybody’s just going along with the process in good faith and hoping we get something good on the table.

[00:18:47] Gabriele: Yeah, I think everyone is engaged, of course, at different levels. Some perhaps have been more skeptical about regulation, or let’s say about [00:19:00] a more extensive regulation. Some have been more skeptical, for instance, about the horizontal approach versus the sectoral approach. But among stakeholders, my impression has been that there’s always been an understanding that this will be regulated at some moment.

[00:19:14] I mean, we were more advanced than others, but now we see signs that even in other countries, including the US, there are movements towards taking some action in respect of AI, as the recent executive order demonstrates. So I think stakeholders generally have understood that. And so the question is, rather than regulation

[00:19:35] yes or no, more what type of regulation. And I think indeed sometimes people tend a bit to say, oh, regulation is bad and non-regulation is good. So there’s a bit of this dichotomy between regulation yes and regulation no. I’ve always found this a bit too narrow, because it depends on the regulation.

[00:19:56] So, as you said, I think regulation has benefits in the sense [00:20:00] that, for instance, by setting rules that are equal for everybody, you know what you’re supposed to be doing. So there are benefits. And in the EU, for instance, there is a clear benefit: by setting the same standards across the EU, national manufacturers or national developers who want to market their product [00:20:22] in another country in the EU will have common standards, so we’re not going to face, let’s say, certain rules in Germany, different rules in France, different rules in Italy, and so on. So having a harmonized approach in the EU also serves this purpose of facilitating cross-border use and circulation of AI systems and products.

[00:20:43] But certainly there is a question that those rules need to be up to the purpose that they are supposed to meet, and that’s, I think, the challenging question. Of course, it’s not that easy, and people differ on whether certain things should be regulated in a certain way or not. But generally, [00:21:00] back to the question, I feel that everyone was somewhat on board that some regulation was needed.

[00:21:06] Luke: That’s interesting. You know, there’s been quite a big open source movement around AI too. How much has that impacted crafting the Act? Or was it something you all were watching as you were working on this, just how much it proliferated across the open source community while you were making the Act?

[00:21:23] Gabriele: That’s a really very interesting question, and I think one of the complex questions that emerged during the negotiations. With the AI Act, as a start, we wanted to focus on AI systems, right? Not so much on specific pieces or components, let’s say the models or the data; we wanted to focus on the final product.

[00:21:45] And as I said, depending on what the intended purpose of the product was, it would fall under certain categories of risk. So, to the extent that the regulatory framework remained focused on the final product, the question about open source [00:22:00] was not really that important, right? Because, you know, if I’m a manufacturer, I can source my components from different providers.

[00:22:09] I can take them open source or non open source, but then I am ultimately responsible for final compliance. So the question about open source came up during the process of negotiations, because the co-legislators, in particular the European Parliament, I think in this respect, opened the door for specific regulation about foundation models.

[00:22:33] And so I think the moment you move the needle from an AI system to the model, you enter exactly into this discussion of how we’re going to deal with models that are open source versus those that are closed or gated. In that sense, I think the discussion became more complex. In the compromise on the table, there are some rules that regard [00:23:00] foundation models, or what we call general purpose AI models.

[00:23:03] Notably, these are divided into two tiers. There is a first tier of foundation models that is subject to certain documentation and transparency obligations, including as regards, for instance, copyright, and some of those provisions apply also to open source foundation models, notably in particular compliance with copyright obligations.

[00:23:26] The documentation and transparency obligations, no, because those foundation models are sort of supposed to be transparent by themselves to the extent that they are open. But then there is a second tier of obligations for, let’s say, models with systemic risks, where all obligations apply to all models regardless of whether they are open or not open.

[00:23:48] So there is some differentiation in the current compromise on the table, taking into account the fact that open models, while they may also be used in a commercial setting, [00:24:00] because you may have business models around open models, by nature tend to be exactly more open. So certain obligations may be less relevant for them.

[00:24:10] So there has certainly been a consideration in that respect that some different treatment may be warranted.

[00:24:16] Luke: Oh, it’s fascinating. It’s really interesting hearing about the two tiers, the systemic-risk one and the first one. Zooming out a little bit: you’re somebody who’s been very involved in trying to balance and get an act through, right?

[00:24:30] Which is no easy task. But on a personal level, having your background in technology and law and seeing these things play out, what are you most concerned about with AI, or the proliferation of it in technology, that might be either radically misunderstood or that people just aren’t paying attention to? Not necessarily what keeps you up at night, but what things do you think aren’t getting addressed with the level of attention that they probably should be? [00:25:00]

[00:25:01] Gabriele: Sometimes what worries me is the complexity of the issue, the fact that this matter is so complex. I think AI is complex. I mean, what is AI, what can AI do, how are we going to deal with the human implications of AI? And in many senses, how do we want to balance, let’s say, the productivity benefits of AI against the possible risks and our loss of autonomy or agency in certain respects?

[00:25:32] So there is a complexity around AI, which is, however, something that affects everybody, and everybody should somewhat make an effort to understand it. And then there is a complexity about regulating AI, which is a second layer of complexity, which of course is informed by the choices that we make with regard to the first area: where do we want to draw the line and find the balance.

[00:25:54] But regulating this technology is also quite a complex exercise, for a [00:26:00] reason I mentioned at the very beginning of our conversation when we started our chat: regulation already exists in many areas, and we have to make sure that whatever we do, we do it in such a way that we do it properly.

[00:26:17] So I think this complexity around the potential benefits and potential risks of AI, and how we deal with them through regulation, is really something that has kept me very busy, because I felt a lot of responsibility to get it right. And sometimes it really has kept me up at night.

[00:26:40] And so I hope we managed to do it properly. I also hope we are able to convey the message that AI is something everyone should be interested in and everyone should be able to understand. And it’s our responsibility, as a public institution but also as stakeholders in general, [00:27:00] to really make an effort to explain AI to people so that everyone can have their own opinion.

[00:27:08] Because, as you know perfectly well, we are all affected, right? We are all dealing with this technology on a day-to-day basis. So all of us should be in a position to have an opinion about it and not just leave it to someone else to decide for us. But at the same time, we need to be informed, and I think the effort of untangling this complexity is what we should be doing.

[00:27:32] Luke: Absolutely. And one of those interesting things I’ve discovered through doing this podcast, even from people specializing in AI safety and security, is the adverse effects of things that sound very benign on the surface. An initiative like trying to stamp down on certain types of fraud or whatever can have really bad follow-on effects if it’s implemented poorly.

[00:27:55] So, you know, there’s definitely a difficult balance you all are in a position to [00:28:00] strike, where it seems like you’ve got industry on one side, a bunch of people trying to rapidly get things to market, and then the whole side of users and consumer protection: making sure that, while you’re not necessarily getting in the way, people aren’t going to be really adversely impacted by some small detail that was overlooked in some study or some implementation or something like that.

[00:28:26] When you look down the road, 5-10 years from now, what’s your outlook for the future, both with AI in general and with regulation?

[00:28:35] Gabriele: Generally, if I think about the next 5-10 years, and I think about the EU, I’m overall positive, as I’ve gotten to know this technology more. When I started, well, I am a lawyer, not an engineer, but I’ve made efforts to understand it better.

[00:28:55] When I started this work, I think I had a lot of misconceptions, and as [00:29:00] I advanced with the work, I understood it much better, thanks also to the colleagues who helped me understand. We’ve been working in the Commission with our Joint Research Centre, which is the research arm of the Commission. So I’ve been very fortunate to have colleagues, you know, engineers, who have been explaining this technology to me.

[00:29:19] And so, as I’ve learned more, I really see there are a lot of opportunities where it can help us with challenges that we haven’t been able to solve so far with the tools we have. So I think we definitely need this technology, and we need to make sure that we use it properly. At the same time, there are risks.

[00:29:38] Yes, that is also clear to everybody, but I’m not too concerned when I think about the risks technology can bring us, because I think we are well equipped now. I feel that even regardless of the AI Act, which of course is a major piece of legislation that is going to impact the space, if I think for instance about [00:30:00] the GDPR and privacy law, this is something that is sometimes even underestimated in the sense of the impact that this legislation can have on the use of AI.

[00:30:10] The GDPR doesn’t even mention AI, yet it’s a regulation that will certainly have a very important impact on how AI is going to be developed and used, and on how much we can protect ourselves from certain abuses. So I feel that now, when we’re talking about the risks, we have quite a good understanding of the risks that may emerge.

[00:30:35] And this understanding has also filtered up politically. So I’m quite confident that in the next 5 to 10 years we will have the tools that we need to tackle these challenges, but at the same time make sure that we can use the technology for the things that we really need it for.

[00:30:57] Luke: No, it’s great. I mean, it’s just so cool hearing this. [00:31:00] I think people are going to appreciate hearing just how practically you all are approaching this. People tend to think, oh, regulation, legal, but you guys are really digging in, trying the products out, and getting a working knowledge of these things.

[00:31:16] And I think people will really appreciate that. Hopefully it helps them, when they think about things like regulation, to not think so narrowly about it. And, you know, we’re all kind of in this together, right? You’ve been super gracious with your time here, and I really appreciate you coming on.

[00:31:31] Is there anything that we didn’t touch on that you might want a broader audience to know about while we still have you?

[00:31:37] Gabriele: I guess we went a bit off script, but I think you covered it all.

[00:31:43] Luke: Apologies to you for that, by the way. I just liked where it was going, and it seemed like a really interesting conversation.

[00:31:50] Gabriele: That’s great. So I think we covered it, and I don’t have anything in particular that I’d like to add at this moment.

[00:31:57] Luke: You know, if people are interested in learning more [00:32:00] about the AI Act, are there any other resources that you’d recommend listeners tune into and dig around in a little bit? We’d love to hear them.

[00:32:08] Gabriele: Yes, definitely. I would say, for everyone who’s interested in hearing more about our work on the Act, check the website of the European Commission. You’ll find a lot of materials related to the Act and the regulation, of course, but also related to what we call the ecosystem of excellence. In the EU, we talk about an ecosystem of trust and an ecosystem of excellence.

[00:32:29] The ecosystem of trust is the one related to the regulation, because we started from the principle that, in order to have trustworthy AI, we need to put some guardrails in place so that people can actually trust the technology. So regulation is a tool to ensure trust.

[00:32:47] But on the other hand, the EU is also heavily investing in ensuring that we can actually develop AI in the EU, through supporting research and development, testing, and so on. [00:33:00] So in that respect, there are both aspects that we are doing work on. I would definitely encourage checking that out, most specifically on the regulation.

[00:33:08] I myself wrote a couple of articles on that, just to explain, let’s say, the basic concepts of the AI Act. So for those of your listeners who are maybe lawyers and want to go a bit deeper, you can just find them online. And then, perhaps, if that’s okay:

[00:33:27] I’ve registered for maybe 10 newsletters or something like that, because I try to understand a little bit more of the technology side as well. What I’ve found very interesting as a starting point to dig deeper into certain topics are the materials from the MIT Technology Review; I think they’re really interesting.

[00:33:47] And I also signed up for a newsletter that is often right on point on certain topics; for instance, they’ve discussed foundation models a lot: the AI Snake Oil newsletter [00:34:00] from Kapoor and Narayanan. I think this is also quite a good tool.

[00:34:05] Luke: That’s fantastic. That’s exactly the kind of thing we’re hoping people can look into.

[00:34:10] If people are trying to follow you, are you on Twitter or LinkedIn or anywhere on social media people can follow?

[00:34:15] Gabriele: I’m on LinkedIn, and every now and then I post when I go somewhere. So I’m happy to be in touch with people.

[00:34:22] Luke: Cool. Yeah, we can put that in the bio too. Well, I really appreciate you coming on, Gabriele, and giving us such an interesting look into the process of the AI Act and some of your own background.

[00:34:32] And I’d love to have you back on again sometime in the future to check back in, if you’re game for it. Thank you for coming on. Really appreciate it.

[00:34:39] Gabriele: I’d love to. Thanks so much. It’s been really fun.

[00:34:41] Luke: All right, thanks. Thanks for listening to The Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app.

[00:34:50] If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also [00:35:00] shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • Recommendations put forward for regulating emerging technologies within the AI Act
  • What the process has been like for the development of the AI Act, including the key players
  • Where regulation in this space can be most helpful despite the complexities involved

Guest List

The amazing cast and crew:

  • Gabriele Mazzini - Team Leader - AI Act at the European Commission

    Gabriele Mazzini is the architect and lead author of the proposal on the Artificial Intelligence Act (AI Act) by the European Commission, where he has focused on the legal and policy questions raised by new technologies since August 2017. Before joining the European Commission, Gabriele held several positions in the private sector in New York and served in the European Parliament and the EU Court of Justice. He holds an LLM from Harvard Law School, a PhD in Italian and Comparative Criminal Law from the University of Pavia, and a Law Degree from the Catholic University in Milan. He is qualified to practice law in Italy and New York.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.