
Episode 58

The EU’s Approach to Digital Policy and Lessons Learned From The GDPR

Kai Zenner, Head of Office and Digital Policy Adviser for German MEP Axel Voss in the European Parliament, discusses the emerging regulatory landscape for artificial intelligence in Europe and its implications for innovation and consumer safety. He also discusses implementation hurdles of the EU AI Act, specifically the shortage of AI experts and the complexity of enforcement across 27 member states.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:28] You’re listening to a new episode of The Brave Technologist, and this one features Kai Zenner, who is the Head of Office and Digital Policy Advisor for MEP Axel Voss in the European Parliament, focusing on AI, privacy, and the EU’s digital transition. He is involved in negotiations on the AI Act, AI Liability Directive, ePrivacy Regulation, and the GDPR revision.

[00:00:50] A member of the OECD AI Network of Experts and the World Economic Forum’s AI Governance Alliance, Zenner has also served on the UN’s high-level [00:01:00] advisory body on AI. He was named Best MEP Assistant in 2023 and ranked 13th in Politico’s Power 40 for his influence on EU digital policy. In this episode we covered challenges with the AI Act, big use cases, balancing innovation with regulation and the impact on SMEs, the horizontal approach to legislation, the demand for AI experts and the challenge of the current talent pool competing in the international job market, lessons from GDPR, and the GDPR revisions being considered and how they’ll impact data privacy.

[00:01:28] And now for this week’s episode of the Brave Technologist. Kai, welcome to The Brave Technologist podcast. How are you doing today? Hi, thank you very much for the invitation. I’m good. Excellent. Just to kind of level set with our audience a little bit, what inspired you to focus on digital policy and AI regulation as a career move?

[00:01:52] Kai: Yeah. Well, I actually kind of started my career in this area. So I was working at the [00:02:00] Konrad Adenauer Foundation, um, yeah, a party-affiliated think tank, and digital was one of many topics that I needed to cover. And yeah, as is often the case, over time I did more and more in this area, also because I found it quite interesting, and then I had my first focus topic on privacy, and from there it went to AI. And yeah, well, we started basically as one of the

[00:02:29] first policymakers — my boss, Axel Voss, and I as an adviser — that were active in this area. And yeah, since we were one of the first, it kept going and we kept reading and engaging and so on. And yeah. We still like it. We actually like it also in comparison to a lot of other policy topics, because the digital field is much more optimistic.

[00:02:55] There’s much more will to compromise between [00:03:00] certain policy groups. And yeah, this is why I still enjoy being in this field.

[00:03:05] Luke: Excellent, excellent. You also describe yourself as a bit of a digital enthusiast. What excites you most about the digital transformation happening in the EU and where that overlaps with your work?

[00:03:18] Kai: Yeah, I really have this feeling it’s one of the few positive topics that we have at the moment in politics. And also, I’m a person who likes to challenge the status quo, who likes to push for reforms, yeah, who wants to make the world a little bit of a better place. And I feel like the digital sector has so many opportunities and so much potential.

[00:03:46] For example, we in Europe, we care a lot about climate change, but unfortunately so far we are not really using digital technologies, for example, to save even more energy. Or [00:04:00] we didn’t really use technologies, at least in all European countries, for e-governance frameworks and so on. And because of the potential that technologies could have in those areas, for me it’s always an exciting thing to

[00:04:17] push to make it happen, to finally overcome certain obstacles or certain actors that are pushing back. And yeah, this is what motivates me every day to fight again a little bit for a bigger change, using digital technologies better.

[00:04:36] Luke: That’s awesome. Yeah, no, and I can imagine it’s been quite a busy time, with GDPR having a couple of years under its belt now and the AI space really picking up.

[00:04:46] I mean, speaking on that, I know that the AI Act is one of those things you’re putting a lot of focus on right now. Just for folks listening that might not be in the EU, what are the key points that the AI Act is focusing [00:05:00] on, and your primary objectives with that work?

[00:05:02] Kai: Yeah, maybe first of all, it’s really important, in my opinion, to underline that the AI Act, despite certain criticisms that you are hearing or reading at the moment, has a starting point that is very much rooted in the international community of states. So there were a lot of discussions in the 2010s about certain legal gaps, or the need of policymakers to interfere because of technological advances with machine learning, deep learning technology, and so on.

[00:05:40] And back then, countries like the U.S., like Canada, Europe, but also even China or India, Singapore, Japan — they all agreed that there are certain gaps, as I said, certain challenges that we need to overcome. However, the European Union, [00:06:00] from this kind of common perspective, yeah, tried to do its own thing, mainly based on the realization that in technological terms we are really lagging behind the United States, but also China.

[00:06:16] So the idea was to create a legal framework that gives our companies such a high level of legal certainty, and allows them to come up with very trustworthy AI technology — in terms of quality, but also safety, fundamental rights protection, and so on — something so safe that other people, other end users outside of the European Union, would feel like, okay, this is something I need to buy.

[00:06:51] So kind of creating a competitive edge for the European Union. However, what we did then as Europe is we [00:07:00] created a kind of horizontal AI framework that is based on, yeah, a kind of product safety law. So we created a long list of technological requirements that AI systems — no matter in what sector or for what use case they are being used — need to fulfill.

[00:07:23] And this creates a little bit of a problem. If you now look at what the United States is doing, what Singapore is doing — most of the other countries actually decided to go in a different direction. They decided against horizontal AI legislation, and they were, for example, going for an AI Bill of Rights, like in the United States.

[00:07:47] And then they let sectoral legislators or regulators decide how they are implementing or executing those principles in their particular [00:08:00] sector. And this is really the big difference between the European Union — and a little bit also the Canadian AIDA law — and most of the other countries or regions in the world: horizontal versus sectoral legislation.

[00:08:16] Luke: Maybe we can drill down into that horizontal approach a little bit more. When these things are brought up, at least in the US, it’s very binary, right? Where people will tend to jump and say, hey, we’re going to over-regulate too early or whatever. But I have a feeling that people just don’t really understand kind of the fundamentals around some of these frameworks.

[00:08:36] So maybe you could drill down a little bit into how this horizontal approach works and what areas are kind of, you mentioned safety, right? Like, well, maybe we can unpack that a little bit and explain what that means for the average user, if you don’t mind.

[00:08:49] Kai: Yeah, no, of course. So let me go into the AI Act a bit further.

[00:08:52] As I told you, it’s kind of laying down, yeah, a long list of legal [00:09:00] obligations that developers of AI systems need to fulfill. So certain things like risk management; like, they need to provide technical documentation where they are talking about the architecture of an AI technology, also even about energy consumption, a lot of those

things. Then we have also provisions, yeah, saying that you as a provider need to fulfill certain data governance obligations; there are transparency obligations, human oversight obligations, and so on and so on. But the European Union was aware that all those new obligations could of course stifle innovation in AI. And the kind of silver bullet that the European Union had in mind in order to address this potential problem,

[00:09:56] and really to concentrate only on [00:10:00] certain AI technologies that should be regulated: they included a risk-based approach. And in practice, in the final AI Act, it means that there are four different categories of AI systems. The first layer: AI systems that are really not in line with European values, with fundamental rights, and so on.

[00:10:24] And those systems would in the future be completely prohibited across the European Union. So people could not sell them anymore, and could of course also not use them anymore. Then there is another category, a bigger category — so more AI systems will fall under it — that is the so-called high-risk AI systems.

[00:10:48] And similar to the prohibitions, it’s a list of rather specific use cases, or at least sectors — for example, the employment [00:11:00] area, and in employment, it’s the scanning of job applications or similar things. And for those, the developers need to fulfill some of those obligations that I have mentioned, like risk assessment, data governance, and so on.

[00:11:17] Then there’s a third layer of AI systems that are posing a certain amount of risk, but still have a lot of advantages, and for those, they are just, you know, Minor transparency obligations. And the fourth layer is completely unregulated because it’s just posing a minimal or even no risk and. With this combination of horizontal obligations plus a risk based approach that in theory should just identify risky AI systems, the European Union was hoping that in the end they Even though they are [00:12:00] regulating, they are regulating in a kind of soft or light way and not hampering innovation too much.

[00:12:09] If we managed that in the end, well, it remains to be seen. But in theory, I think it was at least a good idea.

[00:12:19] Luke: That’s really helpful in framing kind of where you all are coming from, right? Because it’s a real holistic outlook from what I’m hearing. And when it comes to the high-risk and prohibited areas, it sounds like you’re mainly looking at cases where there could be, you know, unfair discrimination and things like that applied at the AI level.

[00:12:37] Is that kind of fair to assume? Are there any other risks that are really jumping out at you guys right now that you’re seeing in the market or that you’re hearing people talk to you about?

[00:12:47] Kai: Yeah, yeah, thanks for this question, because it allows me to now also criticize the AI Act. But yeah, maybe to start again with the theory of where we are coming from [00:13:00]

[00:13:00] Indeed, it’s a lot about consumer protection, discrimination, and so on. So just to give you a few topics for bans or prohibitions: in the future, the European Union is trying to prohibit things like social scoring, certain forms of emotion recognition, the use of remote biometric identification — especially by law enforcement entities — and subliminal techniques; think about certain forms of advertisements that are kind of trying to, oh, convince the consumer to buy a certain product, and so on.

[00:13:43] For those use cases, the co-legislators of the European Union, as I told you, agreed basically that those areas are so much at odds with existing legislation, with our constitutions, with fundamental rights, [00:14:00] that, yeah, things like that should not happen in the European Union. And then in high-risk, indeed, it’s again a lot about discrimination, about consumer protection, and so on — employment, education, again things like facial recognition, law enforcement use of AI, also migration, border management. Actually, even things like critical infrastructure are being covered.

[00:14:30] So a lot of different areas. Now, my criticism is this: I would say most people in the European Union would agree, on a high level, let’s say, with the assessment that those areas are probably connected to certain risks that we probably don’t even want to see, or at least want to try to protect our society against.[00:15:00]

[00:15:00] But the big issue in the European AI Act is that the evidence for those use cases is rather slim. What I mean with that is that, for instance, in the border management high-risk category, there is a use case of a lie detector used at borders. And journalists in the European Union were actually talking with all 27 member states and their border management entities, and they all said, well, we are not using AI.

[00:15:33] So this case shows you that it’s, at least in some areas, a kind of prediction of what maybe happens in the future, but it’s not really about fully commercialized or deployed AI technologies. And this you’ll see in a lot of those prohibitions and high-risk use cases. Social scoring: I think probably the European Commission had some

[00:16:00] Chinese use cases in mind, but in Europe we don’t really have something like that. And also in China, it was so far only, yeah, some tests; they don’t really have a full-fledged social scoring technology in place. And the way the European Union legally wrote the text is so broad that it could happen that a lot of allowed commercial — so legally commercial — practices or technologies that are used by banks, insurances, and companies in other sectors, which never had a problem, which they could use for many years without having issues with the regulators, are suddenly falling under the prohibitions.

[00:16:52] And only because the regulator was not spending enough time making crystal clear [00:17:00] what in the future is forbidden, too risky, and so on and so on. So this, I would say, is one of the big problems now with the AI Act: yeah, the use cases are just too vague.

[00:17:13] Luke: Yeah, because I think you’re right at a high level.

[00:17:15] A lot of people wouldn’t necessarily disagree with a lot of this, but there’s also those cases — we saw this in the US — where all of a sudden you’re seeing how these things are used over time in ways where the companies or entities aren’t necessarily out in front or very transparent about how they’re using these things, and they’re not necessarily public-facing

[00:17:35] features either. So it definitely seems like a tricky area where you’ve got to see how it’s implemented and see what’s visible, right, to users and businesses, et cetera, so that you have a sense of what’s actually happening. And it kind of brings me to my next question, too. I know a lot of folks wonder with this regulation, like, how do you all envision the process working?

[00:17:56] Is this something where companies are having to submit on a regular [00:18:00] basis what they’re doing to regulators for review? Or is it something where everybody’s kind of operating in good faith and they’re checking on these things over time? Like, how does that work with the enforcement or the implementation of this?

[00:18:11] Kai: Yeah, it’s another good question, which leads me to another point of criticism that we have been voicing now for quite some time. So indeed, maybe to quickly answer and then explain a little bit: it’s a kind of best-effort approach. So companies largely just need to install a kind of internal governance system for how they are developing and deploying AI technologies.

[00:18:41] And in most cases, they don’t need to tell the regulator or the enforcement entity about each step. But the big issue is that also here, the AI Act is extremely vague, and many companies often [00:19:00] do not really know what exactly the regulator expects from them when they need to provide certain information. Also,

[00:19:10] When it comes to the national competent authorities — so in Europe, you know, there’s always this problem that we have 27 different member states with different actors, with different specifications, and so on — in the future, it could happen that, similar to the GDPR, to data protection, a German enforcement body is telling one company, well, you should have

[00:19:36] informed us about what you are doing with this AI technology, because we think it’s risky — but the Spanish authority is really hands-off, and for them it’s fine. So there could be this big problem again about fragmentation. And even, you know, with this best-effort approach, companies [00:20:00] do need to tell the enforcement bodies about certain advances in technology, and they need to provide a lot of documentation if enforcement bodies ask them for information.

[00:20:18] So you basically need to be ready all the time, in case the regulator is asking. And this, of course, is better than an obligation to provide something before you are commercializing a product, but still, it’s not nothing. It’s not that you are completely free as a company to innovate. Especially combined with this legal uncertainty about how to do certain things: how to write your technical documentation, how to do a risk assessment; whether it’s, for example, enough to do a risk assessment before commercializing a technology, or whether — because [00:21:00] AI technologies are evolving over time — you need to do new risk assessments, or you need to do updates on your technical documentation all the time.

[00:21:12] And what is all the time? Do you need to do it every week? Do you need to do it every month? And so on and so on. What are the substantial modifications that could trigger this obligation to redo something, or to do something completely new in terms of data governance, risk assessment, and so on? All those questions are completely unsolved and unclear.

[00:21:35] And this is why we all feel — or many people in Europe feel — that especially for the next two, three, four years, the AI Act will be a very, very big headache for a lot of companies, especially SMEs and startups that lack the compliance capacities of Microsoft and so on.

[00:21:57] Luke: Right. I think that’s super helpful.[00:22:00]

[00:22:00] It reminds me a lot of when GDPR rolled out, when it was implemented, and it kind of brings me to my next question around this too. There was kind of this initial, okay, that’s a Europe thing — you know, at least in the US — and then all of a sudden, I think right around when the actual rollout happened, you had a lot of the major publishers in the US saying, oh my gosh, actually this applies to us too, because our users could be users from Europe

[00:22:24] who will be using our websites and things like this. Is there a similar type of reach with the way that you guys are approaching the AI Act, where, regardless of whether you’re based in Europe or not, if you have users in Europe, it’s still going to apply? Is that a fair read on it from your point of view?

[00:22:42] Kai: Yes, it is. So basically, if you want to see it more concretely, you need to read Article 2, Paragraph 1 of the AI Act, which is actually, to a large extent, a copy-paste of a similar article of the GDPR — to your [00:23:00] question, so it’s really similar. Of course, one is privacy law; the other, the AI Act, is product safety legislation. But there is this extraterritorial effect in both laws.

[00:23:14] And if you really check the AI Act in detail, it’s actually going much further than the GDPR. There’s one, let’s say, yeah, specific case in this Article 2, Paragraph 1 that I was talking about, which says that if you, for example, as a Canadian company, are just producing AI technologies for the North American markets — Canada, the United States, maybe Mexico.

[00:23:47] But if the output of your AI technology is coming to Europe — maybe because of a tourist, maybe because of a downstream company that is using this AI system [00:24:00] for something in the European Union — then you are coming into the scope of the AI Act. In my personal opinion, this is a violation of trade agreements and also of international private law.

[00:24:16] But here in Brussels, a lot of lawyers have very different opinions; I think the jury is out. I’m very confident that my reading is right, but again, depending on who you ask, there are different opinions. But I really believe that this point that we are now talking about — the extraterritorial effect of the AI Act —

[00:24:40] will likely be discussed in front of the ECJ, in front of our European Court of Justice, or there will at least be some additional guidelines on how to understand it, because also here it’s a big question mark — it’s really something new. We never had [00:25:00] something that far-reaching.

[00:25:02] Luke: In some ways, we saw this with GDPR too — it’s a forcing function, especially for us Wild West folks out here, right? In some regards, it was really nice to see these things implemented because, at least with GDPR, it really brought privacy to the forefront of things, coming from Brave’s point of view.

[00:25:22] I mean, we were really early with bringing privacy software to market, and so it was one of those things where it was good to see that fall into place. And it’s similar with AI, where we’re getting in pretty early in that space too. And I think that these are important things that people should at least have discussions about — how it’s implemented or even enforced, for that matter. Or, like you said, it’s unsolved; time will tell, right?

[00:25:46] Just kind of curious too, like, were there any major lessons learned from the GDPR rollout that you all applied to the AI Act, or is it still too early to tell on that front?

[00:25:57] Kai: Yeah, so we were talking now a lot [00:26:00] about, let’s say, negative points of the AI Act. I do believe that certain things are actually rather positive, or at least the EU regulator, yeah, made some important observations and really tried to do things differently now.

[00:26:19] Yeah, maybe just, let’s say, three points in order to back up what I just said. First of all, compared to the GDPR, the European AI Act is much more future-proof, I would say, and at least it’s more inclusive. So there are a lot of possibilities for stakeholders to get engaged, to, yeah, talk with the European Commission, with national competent authorities in the member states, and basically to tell them: okay, look, certain parts of the AI Act are not working, or there are [00:27:00] certain technological advances and therefore certain parts of the AI Act need to be updated, and so on and so on. In the GDPR, as you know, those things were not possible. And yeah, it’s one of the big issues that we have with the GDPR right now: it’s already really outdated, let’s say.

[00:27:24] With the AI Act, I think, at least partially, that will not happen, because it will be constantly updated. Another positive point is that, going back to what I said at the very beginning, there is strong, yeah, international alignment: at least the core principles behind the AI Act go back to all this prep work that was done in the 2010s at the international level.

[00:27:56] And therefore, the AI Act is also, compared to a [00:28:00] lot of other digital laws, rather principle-based. And a third point is on governance. It’s not perfect — I talked already about this fragmentation between member states when it comes to interpretation of the AI Act and so on and so on. But at least

[00:28:22] to some extent, the European Union, with AI, has now really tried to learn from GDPR mistakes — when it comes to fragmentation, when it comes to underfunded member state authorities, and so on and so on. And also here we put in a lot of new mechanisms that should prevent certain problems with the law. And we also created, with the AI Office, a centralized governance body that should be there to, yeah, take care of cross-border cases, [00:29:00] that also tries to align interpretation, tries to help member states to better connect with each other, and so on.

[00:29:10] So on all those three points — and there are many more — I would say the EU actually did a good job and, back to your question, really tried to learn from previous mistakes. But yeah, we are not there yet, so there are still problems.

[00:29:26] Luke: Great, great point too. And I think one other thing that seems to be in your favor a little bit more with the AI space than the whole GDPR side of things is the time

[00:29:37] at which this is getting addressed, right? Like, AI has been around for a long time, but the way we’re seeing it iterate and proliferate now — compared to, you know, by the time GDPR got implemented, you had a pretty dominant cohort of advertising tech and big tech companies that had really taken a grip on how things were getting monetized, to [00:30:00] where it seemed like much more of an uphill battle with GDPR. Whereas with the timing now, you’ve still got big tech companies that are trying to kind of find

[00:30:08] product-market fit with AI, and it’s much less matured in market compared to where, let’s just say, digital advertising or publishing was when GDPR was rolled out, where the fight was intense against entrenched interests, right? How confident are you in the ability for enforcement actions with the current authority, and, you know, in the ability to enforce on companies in this space?

[00:30:35] Because one thing we saw with GDPR that was a bit of a challenge was that the law was there, and in some cases you even had authorities saying that the law was not being followed, but it was really a challenge to get people either educated or motivated to enforce the law, right?

[00:30:51] Or, you know, to find remedies that were practical. Like, how’s navigating that going with the AI Act?

[00:30:58] Kai: So as I said, [00:31:00] we definitely installed certain mechanisms that should help prevent certain effects or certain scenarios that we saw with the GDPR. Right. The big issue, I would say, is that the AI Act is creating such a complicated governance ecosystem that it actually requires a lot of internal investments by national governments, but also by the European Commission.

[00:31:31] So if you look, for example, to the United Kingdom: I think they now have way over 200 — I think maybe even 250 — people working in the AI Safety Institute. This number is quite striking, because the UK doesn’t have an AI law, but they have 250 people. We in the European Union have an AI law, but our Commission right now has only, [00:32:00] I think, 85 people working in the AI Office. So how can it be that there are half the people, even though we have a horizontal AI Act? The Commission wants to bring in 40 more people.

[00:32:15] But then, of course, they are still over 100 people behind London and the UK. The Commission already has huge problems finding experts — deep learning experts, ethical AI experts, and so on. As you know, those people could get much more money at a company like Microsoft, Google, and so on and so on.

[00:32:39] How do you bring those people into the Commission? And if you then compare the Commission, and maybe member states like France and Germany that still have some money, with smaller member states like Malta or Estonia, and maybe also member states that have budgetary problems, like Bulgaria, [00:33:00] for example, or Greece.

[00:33:02] How are those countries managing to find enough AI experts in order to adequately execute and enforce the AI Act? I see a huge problem there. Maybe we will be able to fix this problem over the next four or five years, but until then, I would say our whole governance ecosystem is probably dysfunctional, because there are just not enough experts in the public sector.

[00:33:33] And so far we were only talking about the public sector. Actually, the same applies to the private sector, because, first of all, for product safety you basically have in Europe conformity assessments — third-party auditing, at least in some cases. So companies like McKinsey or Deloitte, Ernst & Young, and so on and so on, they will [00:34:00] provide certification. And even more, we have notified bodies — in Germany,

[00:34:04] it’s DEKRA and TÜV that are basically checking AI technologies. In most cases it’s voluntary, but probably because of the legal uncertainties that we talked about, many companies will ask for this further safeguard that they are doing the right thing. But of course, all those companies and all those notified bodies will need AI experts.

[00:34:32] Every company actually probably needs an AI officer, similar to the data protection officer. And again, if you look at our educational systems, they are not producing that many AI experts. Where do we get them? I guess, again, it will be a huge international competition over a small number of available AI experts.

[00:34:59] And probably the [00:35:00] best, at least, will go to large tech companies, because they pay the most, and probably they also have more freedom compared to a very restrictive public entity in Europe — or, for example, in Germany, where we normally follow the rules and the individual doesn’t have a lot of flexibility if he or she is working in a ministry.

[00:35:25] So there I really see the core problem, especially as I said, over the next few years.

[00:35:33] Luke: Yeah, definitely a lot of challenges ahead, right? Well, one thing I’ve seen after interviewing a lot of thought leaders and a lot of folks in the space — whether they take soft approaches or hard approaches — is I’m seeing things from academia crossing over into the commercial side, where a lot of the researchers and the data scientists actually are trying to be pretty mindful about it.

[00:35:57] And maybe it is lessons from GDPR that [00:36:00] they’re softly approaching, or something like that. But it’s been surprising to me, at least, to see the level of care that people have, where it’s not necessarily visible to the public, right? But when you sit them down and ask them questions about it, they’re thinking about a lot of the same issues that you’ve brought up here, which I think shows there’s some alignment.

[00:36:17] Whether that holds where the rubber meets the road, I think we’ll see. But it’s really interesting to hear those challenges, because people have definitely seemed to take a long view on how they’re going to approach this, even with forming the regulation. To touch on the GDPR side a little bit: I know that there are some revisions or changes being considered.

[00:36:37] Can we drill down a little bit into what those look like from your vantage point? You mentioned that things are a little outdated — are there revisions coming, and what revisions do people expect to see?

[00:36:47] Kai: Yeah, so there is now one ongoing legislative discussion with regards to the enforcement issues that we partially discussed already. For example, [00:37:00] there are cases where there is an issue with a large U.S. tech corporation, but this corporation is headquartered in Dublin, in Ireland.

[00:37:04] And because most large companies are headquartered in Ireland, the data protection authority from Ireland is overwhelmed with requests and so on.

[00:37:33] Therefore, this new legislation that is currently being discussed is thinking a little bit more about how to better use the consistency mechanism in the GDPR, how to speed up certain processes, and basically how to also involve all those member states that have a lot of affected end users but are not Ireland. At the moment it’s looking like [00:38:00] there is maybe a small potential that there could already be a political deal at the end of the year; a little bit more likely, it will happen in quarter one next year. Whether this reform is really groundbreaking, or really changes some of the major obstacles with the GDPR, we will need to see, but at least I would say it’s a good step forward.

[00:38:29] And then there’s this big question about what else. So indeed, there are some people here in Europe that have been pushing for quite some time for a larger, really comprehensive GDPR revision. The new Commission, which was just appointed, is so far not showing much interest in a really big GDPR revision, because they are thinking that, [00:39:00] in the current political atmosphere, it’s probably not wise to reopen it.

[00:39:06] What the Commission and people in the same camp are saying is that, similar to this GDPR enforcement regulation that targets one particular issue, maybe there’s room for another specific GDPR revision: one other particular problem is identified, and then there’s a very, very targeted revision in order to solve it.

[00:39:37] But yeah, the jury on this point is also still out. The commissioner hearings are now over; as I said, the Commission is now officially appointed and can start to work. With all those new people in the Commission, a lot of commissioners are there for the first time, and there is always the potential that [00:40:00] someone wants to make a name and basically try out something new, take a big bold step forward.

[00:40:07] And yeah, actually saying, no, we need a GDPR revision. We will probably see in the next six months whether something big is happening, or whether it’s more what I said at the beginning, that we maybe see a very, very targeted revision of another point. A huge question mark in this area is the ongoing ePrivacy discussion.

[00:40:33] As you know, the ePrivacy Directive has been applicable for many, many years. There was an attempt to update it with a regulation, but the discussions have been kind of stalled for many years now. And actually there are many people saying that the Commission either needs to provide a completely new proposal, or to [00:41:00] basically divide the ePrivacy Regulation proposal into three parts. One part, on online advertisement, could be a standalone EU law.

[00:41:10] So a new law. Then maybe there’s something separate on cookies, maybe another cookie pledge or something like that. And let’s say the data protection related parts could actually be included in the GDPR. If this is happening, then I think we will see a big GDPR revision, because to include this ePrivacy part, you basically need to reopen everything.

[00:41:40] So yeah, with everything I said now, you see there are a lot of question marks about which direction it’s going.

[00:41:48] Luke: Yeah, no worries. These are pretty complex and complicated issues. It reminded me, on the point of someone making a name: we used to work really closely with Johnny Ryan, and [00:42:00] he’s who I immediately go to when I think about this, because that was my question around the enforcement. He ran into those issues where, you know, even in some cases the ICO would say, yeah, this

[00:42:11] looks like it’s in violation or something, but it was just like, well, what can we get done? And then, you know, these companies have so much power; you even see them moving on issues like around cookies and things like that. It’s an ongoing battle for sure. But no, this has been really helpful clarity.

[00:42:28] One more point, because I know you’ve been super gracious with your time, and I really appreciate it. In the beginning of the conversation, you made a good point around almost setting a standard, raising the bar on quality for things, right?

[00:42:43] You know, what is your main response to critics who argue that these regulatory approaches are putting Europe at a disadvantage compared to other regions? I think you might have answered this a little bit earlier.

[00:42:54] Kai: Yeah, this actually became a kind of passionate topic for [00:43:00] me because, as I said at the very beginning, I’m really a person

[00:43:06] whose character is all about pushing for change, pushing for reforms in order to improve things, and so on. And I truly believe that many of those laws that we have adopted as the European Union over the last years had really good intentions; the policy objectives were very smart and correct. But the big problem, or two big problems, of the European Union is that those initiatives were often not effectively coordinated at all. They were all created in, let’s call it, policy silos or work silos.

[00:43:50] Therefore, we tried to address certain problems with platform workers in the Platform Work Directive, but we [00:44:00] didn’t care about overlaps with the GDPR or with the AI Act. So what does it mean? Large companies will always find a loophole; they can always claim, well, law number one is actually allowing us to perform a certain activity.

[00:44:17] And yeah, law number two might be problematic for us, but again, law number three is actually quite unclear, so in an overall assessment, you cannot ask us for more. We see a lot of this: this basically uncoordinated, high level of legislative activity by the European Union, in the end, really didn’t help us a lot.

[00:44:44] Most of our policy goals we didn’t really achieve. And yeah, I really hope that in the future we do better: maybe reducing the number of laws, merging them into [00:45:00] bigger frameworks, and also investing, and this is the second big problem, much more into enforcement and implementation. Policymakers, no matter if from the Commission or from the Member States or from the Parliament, really focused on negotiating and drafting laws, but once a law was adopted, they didn’t really care about it anymore, because in political terms it’s not really sexy to talk about how to implement a law, or whether there are certain paragraphs that need to be improved, and so on. You cannot do a nice LinkedIn post or Twitter post about it, or whatsoever.

[00:45:44] And there we actually also need to change. We need to focus much more on enforcement, going back to our AI governance discussion, and we also need to invest a lot of money to build up the [00:46:00] required enforcement or governance structures. If we do both of those things, refocusing on enforcement but also better coordinating our activities, then I truly believe the European Union could build up a kind of international brand of trustworthy digital solutions coming from the European Union, or made in Europe, and also give our companies enough legal certainty to heavily invest in certain areas and build products that are in line with our values, and so on.

[00:46:46] And actually, many researchers that I’m working closely with are making a link to something that is called legal coding: using harmonized technical [00:47:00] standards from CEN-CENELEC, or internationally from ISO, or technical reports from things like Gaia-X, cloud management, and so on.

[00:47:12] Basically, if we used those instruments not to replace law but to specify law, I think we could again make it much easier for companies to implement certain rules or certain policy objectives already at the design level. But yeah, unfortunately this is all kind of theory right now, because again there is such a big lack of enforcement and such a huge issue with uncoordinated laws, which both lead to high compliance costs, a kind of legal chaos.

[00:47:56] So we are very, very far away from a situation [00:48:00] where companies could actually create this competitive edge by making trustworthy technologies coming from Europe. There is potential, but right now there is a larger problem that is preventing us from fulfilling it.

[00:48:17] Luke: Yeah, there’s a lot to chew on here, but I really want to commend you on the work, and on approaching these things with a broad and holistic view, with a long road ahead.

[00:48:28] Like I said earlier, if anything, it brings these points up for discussion, for folks to consider who may not have been considering them, especially with such broad applications. And Kai, you’ve been super gracious with your time today. We covered so much ground, and these are really complex issues.

[00:48:44] I really want to thank you for coming on. Where can people follow you online if they have additional questions or want to follow your work and keep tabs on how things are going with these topics?

[00:48:55] Kai: Yes, as I mentioned already, policymakers [00:49:00] are on LinkedIn and Twitter. Our office, my colleagues, and Axel Voss are heavily announcing what we are doing via LinkedIn and via Twitter.

[00:49:11] Also, I personally have a blog, or website, basically my name, so kai-zenner.eu. There you basically see everything we are doing. We are also constantly asking stakeholders for feedback, and very often doing some kind of consultations, because we really believe in engaging with

[00:49:35] stakeholders, to learn from them what the challenges are, and so on. And yeah, if you have a good idea and you want to speak with me about it, just send me an email or a private message via those platforms that I mentioned.

[00:49:52] Luke: Fantastic. Well, we’ll be sure to keep those in the show notes so that people can do just that.

[00:49:57] Again, I really appreciate you making the time today. We [00:50:00] covered a lot. I’d love to have you back on to check in on how things are progressing over time, if you don’t mind.

[00:50:05] Kai: Gladly. Let’s do it.

[00:50:07] Luke: Excellent. Well, thank you very much, Kai. Thank you, and ciao. Thanks for listening to The Brave Technologist podcast.

[00:50:15] To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • Challenges with the AI Act (such as vague use cases, balancing innovation with regulation, and the impact on SMEs)
  • Lessons from GDPR, including upcoming changes being considered that could impact data privacy
  • Horizontal legislative approaches and their implications
  • Future prospects for AI regulation in Europe

Guest List

The amazing cast and crew:

  • Kai Zenner - Head of Office and Digital Policy Adviser for MEP Axel Voss

    Kai Zenner is Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament, focusing on AI, privacy, and the EU’s digital transition. He is involved in negotiations on the AI Act, AI Liability Directive, ePrivacy Regulation, and GDPR revision. A member of the OECD.AI Network of Experts and the World Economic Forum’s AI Governance Alliance, Zenner also served on the UN’s High-Level Advisory Body on AI. He was named Best MEP Assistant in 2023, and ranked #13 in Politico’s Power 40 for his influence on EU digital policy.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.