Episode 44

Human-Centered AI: Uniting Academia and Industry for Ethical Tech

Hamed Haddadi, Chief Scientist at Brave Software, discusses the future of ethical and inclusive technology, and how a human-centered approach can transform AI development’s impact on society. We also discuss how to navigate the complex dynamics among academia, industry, and policymakers in crafting ethical standards that safeguard citizen rights.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we're demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I'm your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:29] You're listening to a new episode of The Brave Technologist, and this one features Hamed Haddadi, who is a professor of human-centered systems at the Department of Computing at Imperial College London. In his industry role, he is also the Chief Scientist at Brave Software, where he leads our research team.

[00:00:43] In this episode, we discuss the journey of the Brave research team, the importance of having a research team at Brave, the pairing of academia and industry and why that's important in technology, different areas of research at Brave, and the importance of human-centered systems. Now for this week's episode of The Brave [00:01:00] Technologist. Hamed, welcome to The Brave Technologist.

[00:01:05] How are you doing today?

Hamed: Excellent. Thanks very much, Luke. How are you?

Luke: Good, good. Looking forward to having you on. Can you give us a little backstory on how you ended up getting into what you're doing, and what in particular in your background helped you get where you are?

[00:01:19] Hamed: Yeah, so from childhood, I've grown up in a working family.

[00:01:24] I've always been interested in building things and putting things together, and then I got into putting electronic circuits together and all of that. I've always been fascinated by the use of technology in everyday life. Slowly that got me to doing more work on electronics and electronic engineering, and that gradually led me to computer science.

[00:01:46] Then I did a couple of degrees here in London, and then in Cambridge for my PhD, generally in electronics engineering, telecommunications engineering, and then computer science. I've also always had an [00:02:00] interest in working in applied settings. So throughout my life, either as an undergraduate student or a postgrad, I've always done internships and stints in industry.

[00:02:11] So I've worked at Sony, at Intel, and at AT&T, and I've spent time with Microsoft Research. I've always been keen to keep my research and my scientific work as realistic as possible. And in the last six years, I've been working with Brave on developing the latest research and technology for our users.

[00:02:38] Luke: Awesome. How did you get into working at Brave?

Hamed: So I am a professor at Imperial College London, and one of my colleagues at the time, Ben Livshits, was the Chief Scientist of Brave, so I spent some time working with him on privacy-preserving advertising. Even before that, about [00:03:00] 15 years ago, I had been working in the space of private analytics and privacy in advertising, and also doing work on, for example, fraud detection in online advertising and generally the old web.

[00:03:13] So I had engaged quite a bit with folks in this space. Slowly that led to me spending time as a visiting professor for a while, and then I took the lead of the research team at Brave about three years ago.

[00:03:31] Luke: What's the work that you're doing in research at Brave?

[00:03:34] And why is it important that Brave has this research team?

Hamed: That's an excellent question.

[00:03:39] Not many startups, or even scale-ups, can afford a research team. The work that we do at Brave, or at companies like Brave, is really at the cutting edge of science and technology, and sometimes the latest developments [00:04:00] really need you to spend

[00:04:02] quite a bit of time doing experiments, developing stuff, and putting it out there in the community to see whether the community accepts it. Whether it's security protocols or privacy protocols, you cannot just roll out tech and hope for the best. You need to have open source code.

[00:04:24] You need to have done the research, done the experiments, and demonstrated that something works and that it can be audited and verified. Especially in the space of machine learning, for example, you have topics like auditability, explainability, and fairness, and then in security and privacy, you have to have verifiability.

[00:04:47] So a lot of the research team's time is spent on developing the latest technology and working closely with the scientific and academic community to make sure that this technology [00:05:00] can be trusted, basically, before we roll it out to users.

[00:05:04] Luke: Have there been cases where you all have taken something from that research stage, like from a white paper, all the way to it now being used in Brave, that you could give as an example?

[00:05:16] Hamed: Yeah. There have been quite a few works, for example on detecting the latest ways of working against ad blockers, or detecting malicious content on the webpage. For example, a product out of research called PageGraph has been in development for quite a few years, in various iterations, going from looking at the objects on the web page and how they link together to form a graph, to looking into the actual content.

[00:05:46] So, for example, JavaScript on pages. That's one example, in the ad-blocking space: how we fight back, basically. This is a constant tug of war, where the analytics [00:06:00] industry, or the ad industry, tries to find new ways of tracking users, new ways of placing cookies, and you have to keep developing technology to defeat that.
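As a rough illustration of the graph idea behind PageGraph, here is a minimal sketch that links page elements (document, scripts, images, network requests) into a directed graph, so a tracking request can be attributed back to the script that caused it. The structure and names are simplified assumptions for illustration, not Brave's actual PageGraph implementation.

```python
# Minimal sketch of a PageGraph-style idea: model page elements as nodes
# in a directed graph, with edges recording which element created or
# requested which. Illustrative only, not Brave's actual implementation.
from collections import defaultdict

class PageGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # actor -> list of (action, target)

    def add_edge(self, actor, action, target):
        self.edges[actor].append((action, target))

    def requests_caused_by(self, actor):
        """All network requests transitively caused by `actor`."""
        found, stack = [], [actor]
        while stack:
            node = stack.pop()
            for action, target in self.edges[node]:
                if action == "request":
                    found.append(target)
                stack.append(target)
        return found

g = PageGraph()
g.add_edge("document", "creates", "script:tracker.js")
g.add_edge("script:tracker.js", "creates", "img:pixel")
g.add_edge("img:pixel", "request", "https://tracker.example/collect")

# Attribute the tracking request back to the script that injected it.
print(g.requests_caused_by("script:tracker.js"))
```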

[00:06:11] The other example is private analytics, to understand how people are using the browser: what sort of things break, what sort of things frustrate them, what sort of interactions they have. For many browsers this is very simple, because they just collect every single action that the user takes; every single click is monitored, every single navigation is monitored, and all that data comes through.

[00:06:33] For Brave, as a user-first and private browser, this is really not an option. So we've been working for quite a few years on always having the best, latest, and most private analytics system possible, to make sure that there is no way, even for Brave, of re-identifying users, or even of re-identifying small cohorts of users in different [00:07:00] parts of the world or in different user cohorts.

[00:07:02] These are just some of the examples. The same goes for using the latest machine learning, for example for detecting fraud on the web and detecting page breakages in a private manner.

[00:07:15] Luke: You mentioned this Nebula release, right? That's on the privacy-preserving analytics side?

[00:07:22] Hamed: Exactly, yeah. Nebula, for example, uses various levels of protection, including some previous work at Brave called STAR, which was also published in top security conferences and which ensures that small cohorts are basically not identifiable. On top of that, Nebula uses differential privacy, which is a way of injecting just enough noise into a dataset that my presence or absence in the dataset doesn't make a big difference.

[00:07:58] So in a way I become [00:08:00] kind of unidentifiable. We've just rolled that out to all the users, and that work has also been submitted for review at a top conference. So we do this all the time, and there will be similar efforts in new products that we will be developing.
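To make the differential-privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism applied to a simple count query. It illustrates the general technique described above, not Nebula's actual protocol; the query and the epsilon value are illustrative assumptions.

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float) -> float:
    """Laplace mechanism: a count query has sensitivity 1 (one person's
    presence or absence changes it by at most 1), so adding noise drawn
    from Laplace(1/epsilon) gives epsilon-differential privacy."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Two neighboring datasets: one with a given user, one without.
with_me, without_me = 1042, 1041
epsilon = 0.5  # illustrative privacy budget; smaller means more noise

# The two noisy answers are statistically hard to tell apart, so the
# released count reveals almost nothing about any single user.
print(noisy_count(with_me, epsilon), noisy_count(without_me, epsilon))
```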

[00:08:22] Luke: Yeah, and if I just look across the Brave stack, there are a lot of different types of technology. What technologies are you most excited about today, or putting a lot of focus on now?

[00:08:34] Hamed: This space is moving really rapidly. The latest advances in cryptography, for example in privacy protocols, have really let us do things that we couldn't necessarily do a few years ago, for example rolling out really private and verifiable wallets, or ad attributions. In terms of the latest advances in [00:09:00] machine learning, what we're doing is

[00:09:02] using the existing knowledge that is on the web, and the existing knowledge in, for example, our support datasets and databases. We can enable live interaction with the user when they have a support query, and we can automate that in a way that relieves pressure on the support team but lets the user get detailed feedback on a specific action they want to take in the browser. And hopefully, with things like on-device models and advances in RAG (retrieval-augmented generation), for example, we can look into even automating some of these tasks, so the user can just say in natural language, 'Oh, I want to organize my tabs by topic,' and all their tabs automatically get organized for them.

[00:09:52] So they don't even need to click a button. That's kind of the direction we are moving towards.

[00:09:56] Luke: You mentioned [00:10:00] on-device models there, so I've got to ask you about that, because it's something I've been super interested in, even from when we were doing the advertising model at Brave and trying to do this on the local device.

[00:10:10] There's so much attention on these large language models in the AI space right now. What are some of the things you're looking at for on-device modeling, and what are some of the challenges you're facing in trying to develop those at Brave?

[00:10:25] Hamed: Yeah, luckily there are a lot of good models being released openly at the moment, and there is a race between a few large model providers to have the latest and best models out.

[00:10:37] On-device models are fascinating because they let you do things completely in a private manner. Think about it: if I can have, for example, a version of Leo or a version of other popular LLMs sitting locally on my machine, I can do all the analytics, all the AI tasks [00:11:00] that I want to do, or even all my email writing, email summarization, messaging, social media, and whatever else, locally.

[00:11:08] So, all without the privacy risks. The challenge here is that, at the moment, these are mostly quite big models. The memory of a device, a mobile phone or even a desktop or a laptop, is limited at the end of the day. I cannot ship a three-gigabyte model to the user for every single task, because at some point their device is just going to be full, and their battery is going to be drained in an hour.

[00:11:36] They would need to be constantly connected to a charger, like back in the days when we used to carry our chargers around with us. So these are some of the challenges. And then think about search: one of the nicest things that we have in Brave is having our own private search engine.

[00:11:57] Say I want to do some interaction, or I [00:12:00] want to do some fact-finding. The local model on the device might not necessarily have the latest news articles, for example, or the latest social media posts. So if I can augment that local model with knowledge that comes live from search, in a private manner, then I can do a lot more.

[00:12:19] These are some of the exciting things that not only the researchers at Brave but also the search team are looking into together.
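As a sketch of the pattern described here, augmenting an on-device model with live search results, the following shows a minimal retrieval-augmented generation (RAG) loop. The SearchAPI and LocalLLM classes are hypothetical stand-ins, not Brave Search or Leo APIs.

```python
# Minimal RAG sketch: fetch fresh snippets from a search backend, then have
# a local model answer with that context. Both classes are hypothetical
# stand-ins for illustration only.

class SearchAPI:
    """Stand-in for a live search backend (e.g. fresh news snippets)."""
    def top_snippets(self, query: str, k: int) -> list[str]:
        return [f"Example fresh snippet about {query}"][:k]

class LocalLLM:
    """Stand-in for an on-device model; generation never leaves the device."""
    def generate(self, prompt: str) -> str:
        return f"(local model output for a {len(prompt)}-char prompt)"

def answer(query: str, search: SearchAPI, llm: LocalLLM, k: int = 3) -> str:
    snippets = search.top_snippets(query, k)        # retrieve fresh context
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = ("Answer using only the context below.\n"
              f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
    return llm.generate(prompt)                     # inference stays local

print(answer("latest privacy news", SearchAPI(), LocalLLM()))
```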

[00:12:28] Luke: That's awesome. That's really exciting to hear about, and I think people are looking forward to seeing how that goes. I'm sure, too, that as devices get better and better over time, it'll open up more doors.

Hamed: Exactly.

[00:12:39] Luke: There's a lot of discussion around human-centered systems in the context of AI. How do you define that yourself? And why is it crucial for emerging tech to adopt this approach?

[00:12:53] Hamed: Yeah, I think for decades we've been basically building tech for various industries: for the advertising industry, maybe even for [00:13:00] the surveillance industry. It's really important to think about where tech takes us next.

[00:13:06] Is this the direction that we want to go? What about the users? What about the human beings? How does even the data collection and the analytics affect the user? The fact that I show you certain media today, or the fact that I do some data collection, for example from your browser or from

[00:13:25] your smartphone or your Internet of Things devices, and based on that make some decisions about you, putting you in a certain category, showing you certain social media posts or certain news posts: that has an effect on you as a user, as a human being. And this affects our feelings.

[00:13:45] It affects the emotions that we go through as human beings. So we really have to think responsibly about where we go next with technology, and how technology will affect us as individuals and as a society as a whole. If [00:14:00] not done carefully, we will really be in a dangerous situation. So it's important to think about this.

[00:14:06] And I think Brave, for example, with a private approach and a privacy-first view, really is taking a good step in this direction.

[00:14:14] Luke: How important is that collaboration between academia and industry?

[00:14:22] Hamed: A lot of advances in the latest technology these days actually come from industrial research labs, because they have good availability of resources like hardware, GPUs, and data centers, which a university might not necessarily have.

[00:14:38] However, at the end of the day, we always need to educate the next generation of researchers, the next generation of scientists and engineers. And sometimes, as an industry, our primary focus when developing technology is optimizing for growth or optimizing for revenue.

[00:14:58] We also need to think [00:15:00] about other aspects of technology. Is it fair? How much bias is this tech introducing in our users or in our society? How do I bring accountability into this whole equation? What about inclusion, for example? So I think collaboration between academia and industry really allows us to develop fascinating, blue-sky, deep, and large projects and research, while having a kind of sanity check: is this the right direction?

[00:15:33] Is this what I should be developing, basically?

[00:15:36] Luke: Yeah, and you mentioned inclusion there, and fairness and things like that, in your last answer. What are some of the biggest challenges you see in ensuring ethical AI development goes forward, both in academia and in industry, and in those relationships?

[00:15:54] Hamed: If you'd asked me this five years ago, I would have been very disappointed; [00:16:00] I wouldn't have had a good view of this whole space. But I think things have changed a lot, thanks to both academic and industry researchers, and also to global think tanks and charities and all of that.

[00:16:15] We are thinking more about: okay, what is the impact of this surveillance mechanism that we are developing, or this offensive tech that somebody is developing, and how do we defend against cyber attacks and things like that? Even at the larger scale, there are things coming up these days like ethical hacking and all of that. This is a very complicated space, and there are different views, but I think it's good that we are discussing these things and thinking about them while developing technology nowadays.

[00:16:50] And the challenge sometimes is really identifying the damage that a specific technology can introduce before rolling it out [00:17:00] to millions and millions of users. So that's really important, I think.

[00:17:03] Luke: Yeah, definitely. And on that note, you're seeing more and more steps in the policymaking space, too.

[00:17:11] Are you seeing a lot of engagement between academia and policymakers? And how do you feel about the pace of that, from your point of view?

[00:17:20] Hamed: Traditionally, it's not been easy for academia and industry to talk to governments and policymakers. But I think in the last few years, the availability of new tech, like social media, blog posts, and platforms like that, has enabled more real-time and more interactive engagement between these various stakeholders, including user groups, actually, which can now be vocal and get their voices heard.

[00:17:48] So going forward, this space will hopefully improve. Technology is not there to police [00:18:00] society, but it can help bring more transparency into the way things happen in society. It can be an eye, basically, a kind of watchtower for what's going on.

[00:18:15] The challenge I think we are facing is going to be determining what incentives these different actors have in the space, all the way from industries engaging in standardization efforts, to industries rolling out tech for governments, to simply complying with various government requirements: okay, as a social media company, or a browser company, or whatever, do I enable access to user data?

[00:18:47] Luckily, at Brave we don't have this problem, because by design and from the beginning we've been thinking about this: just don't collect any user data, and make sure the [00:19:00] user is put first and their privacy is intact. But yeah, I think this is going to be a really complicated space, where a lot of the big tech and AI companies will also be seen as gatekeepers, basically, in terms of having a lot of data and being able to reach millions and billions of users.

[00:19:18] So I think this is going to be an interesting and challenging space.

[00:19:22] Luke: Yeah, definitely. Especially when you consider how embedded the technology is becoming at the device level. How much does that concern you now, when you start to see things like the AI that's integrated in the browser also being integrated in your operating system?

[00:19:38] It all seems to be getting much closer to the user. Are you concerned at all about the amount of data that's going back into these clouds from these devices and users?

[00:19:52] Hamed: Yeah. We do a lot of experiments in my research lab at the university on inspecting the [00:20:00] data that leaves various devices, for example smart home devices or wearable devices.

[00:20:04] And there is a lot of data going to many destinations that we don't even know: hard-coded IP addresses, dynamic DNS endpoints, basically unknown locations. This is really the concern. And I think governments and many of the larger tech providers are acting together to develop some regulations and standards to start preventing this, because I think in the last

[00:20:36] 10 years technology has been growing really rapidly, but without many checks and balances in place: okay, is this ethical? Where is this data going? Do I need consent from the user, or can I, 10 years later, just go to the user and say, 'Oh, we've been collecting your data, and now we are going to train our next model with it,

[00:20:58] and by the [00:21:00] way, your face might be in that dataset'? I think there's definitely a concern here, and this is where industry, academia, and regulatory bodies really need to work closely together to make sure that potential damage is prevented before it happens at a much larger scale.

[00:21:22] Luke: Looking out over the next 5 to 10 years, how do you think human-centered AI will impact society, especially with how rapidly things are advancing?

[00:21:33] Hamed: This is a really interesting but very hard question. I think the incentives that we see from different large corporations, for example big tech, around data collection, not only for advertising or for enabling larger models to be built, but also for,

[00:21:53] let's say, collaborating with governments on surveillance, or simply complying with [00:22:00] various government requirements around the world for various levels of data collection; I think this will have an impact on society in terms of citizen rights, human rights, and being able to keep democratic processes in place.

[00:22:15] We've already seen some of the effects of this in recent political campaigns, around wars, elections, and all of that. So I think at least one thing that's happening is that we have more awareness of the risks that such rapid advancements in technology can carry, and this education and awareness will hopefully let us overcome some of the challenges that, in the last 5 to 10 years, we were just unaware of.

[00:22:42] We need to keep developing, we need to keep building interesting stuff, but we need to think about user privacy. How will on-device models, for example, help prevent data from just leaking out there, things like that? I think there's hope, but we need to work hard.

[00:23:00] Luke: Awesome. And before we wrap things up, I think it'd be interesting for folks to get a sense of this: you have this research team at Brave, right?

[00:23:07] What are the different areas of focus of the team? How is the team broken up across research and development? If you could give us a bit of a snapshot of that.

[00:23:17] Hamed: So the research team is formed mostly of folks working in the space of privacy, cryptography, machine learning, and security.

[00:23:27] They interact closely with product teams at different levels, for example when developing new product lines, or sometimes when a product line faces specific challenges that need a more complicated or more experimental approach, to see if they can find a solution.

[00:23:49] So usually the researchers engage directly with various product teams. We try to make sure that we have both open science and [00:24:00] openly published papers. For example, if we design a new analytics system like Nebula, or a new cryptography system, for example for fraud detection,

[00:24:13] we put these things out there for society, for the research community, to be able to vet and audit and go through. It's also a contribution to the scientific community overall. Even though we have a small team of researchers, we try to make sure that our impact goes beyond just the research community and

[00:24:31] Brave as a company, and we want to make sure that we contribute to the wider scientific community as well. We also have an internship program: every year we get several PhD students who come and spend three months with us, mostly in the summer, and that helps us interact and engage more with academia as well.

[00:24:52] Luke: Awesome. Is there anything we didn't cover that you want the audience to know about?

[00:24:58] Hamed: Nothing specific, just that [00:25:00] we are always open to discussing the latest advancements in science, to new experiments, and to new feature requests from the audience, from the users. We are always happy to hear new ideas.

[00:25:12] The crazier they are, maybe the more exciting they will be.

[00:25:15] Luke: Awesome. Well, Hamed, I really appreciate you taking the time to join us here today. Where can people find you if they want to follow along with your work, or where you're posting online?

[00:25:25] Hamed: Sure, yeah. We have a brave.com/research webpage where we put all our papers and all our code and all of that. And on social media I am @realhamed, so I'm on X and LinkedIn and all of that. It's relatively easy to find me.

[00:25:42] Luke: Awesome, man. Well, thank you so much, Hamed. I really appreciate it. We'd love to have you back at some point to go over any new stuff that you all release and want people to know about.

[00:25:50] Hamed: Thank you very much. Have a great day.

Luke: Thanks for listening to The Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. [00:26:00] If you haven't already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately.

[00:26:08] Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • The importance of human-centered systems
  • AI’s role in preserving democratic processes
  • The purpose of Brave’s research team
  • Future of ethical and inclusive technology

Guest List

The amazing cast and crew:

  • Hamed Haddadi - Chief Scientist

    Hamed Haddadi is the Professor of Human-Centered Systems at the Department of Computing at Imperial College London. In his industry role, he is the Chief Scientist at Brave Software, where he leads Brave’s research team.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.