
Episode 83

AI Safety, Scam Tactics and Threat Mitigation

Assaf Kipnis, AI safety (intel and investigation) at ElevenLabs, discusses the evolving landscape of online safety, the sophisticated tactics of threat actors, and the role of regulation in shaping tech company responses. He also explores the need for accountability in both tech companies and regulatory bodies to enhance safety and security in the digital space.

Transcript

Luke: [00:00:00] From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist Podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of business operations at Brave Software, makers of the privacy-respecting Brave browser and search engine.

Luke: Now powering AI with the Brave Search API. You’re listening to a new episode of The Brave Technologist, and this one features Assaf Kipnis, an AI safety investigator with over a decade of experience at companies like LinkedIn, Facebook, and Google. Now at ElevenLabs, he builds systems to uncover and respond to emerging threats in generative AI, focusing on the intersection of security, abuse prevention, and human impact.

Luke: Assaf is passionate about reclaiming technology as a force for good, creating environments where people feel safe, seen, and valued. In this episode, we discussed how online safety has shifted [00:01:00] over the last decade, the big players causing the most harm right now, the most pressing new tactics that threat actors are using in the scam space, and the role of law enforcement and regulation in pressuring companies, platforms, and teams to improve online safety.

Luke: And now for this week’s episode of The Brave Technologist.

Luke: Assaf, welcome to The Brave Technologist. How are you doing today?

Assaf: I’m good. Thanks, Luke.

Luke: Yeah. I don’t get to talk to very many people in your role very often, so I’m super excited to have your point of view and to dig in on the topic a bit.

Assaf: Yeah, I’m excited to talk about it. I like talking about this stuff.

Luke: Awesome. You know, you’ve spent over a decade inside some of the world’s biggest tech platforms. From your point of view, what has shifted most in how you think about online safety today?

Assaf: So looking back at, let’s say, seven or eight years ago, before I [00:02:00] started at Facebook, my perspective was: there are bad actors on the platform.

Assaf: We need to take them down. So if we find bad activity, we remove the accounts that are creating it, and we move on with our lives. Or we remove it in bulk: find everybody else that’s doing this and remove it. What I learned over time is that when we do that, we don’t really get to the core of what’s happening.

Assaf: We’re just kind of cleaning out the symptoms. As the team I was on was growing and changing perspectives, we moved into something that is called threat mitigation. The perspective around threat mitigation is: how do we find where the bad accounts came from? I used to always call it, what’s the door that they came through? And how do we leverage rules and engineering in order to make it so

Assaf: The actors can’t come back through that exact vector. It means working [00:03:00] much more closely with engineering and with machine learning: refining the classifiers, creating rules that are derived from the intelligence that the investigation teams provide.

Luke: Sounds a lot more holistic, right? Less cat and mouse, less of a hammer approach.

Assaf: That’s exactly what it is. A lot of trust and safety is a lot of people with tiny hammers. Mm-hmm. Large companies can continue to grow these teams and play around with the tiny hammers, but in more agile companies and in smaller companies, you just don’t have the breadth, so you need to fix things holistically.

Luke: It is tough sometimes when thinking about safety threats, when the info spaces can be so binary or hyperbolic at times. Are the threats maturing a lot in their nature?

Assaf: I might have a controversial opinion about this. Honestly, and I’m always [00:04:00] open to being wrong, I honestly think that at the core, the abuse stays the same.

Assaf: What the actors want to get is money, and they have had ways to get money online for a really long time. Each time, they just adjust to the new surface. So if you have YouTube, they adjust to YouTube. If you have TikTok, they adjust to TikTok. But in the end, the abuse is very, very similar.

Assaf: A couple of thoughts beyond that. One of the spaces that, even before AI, has really taken off is the scam space. The scam space that a lot of us used to know is, oh, the Nigerian prince is gonna send you an email; it’s kind of silly at this point, and it became a meme. But around 2020, people started to work from home.

Assaf: The scammers also had to start working from home, which was actually a big pivot point for them, which we can talk about another time. They started becoming a lot more sophisticated. Well, not all [00:05:00] of them, but a good amount got a lot more sophisticated, because this became their revenue stream.

Assaf: And it came down to large scam organizations that are doing this at scale. Once you have sophistication and scale, you just get better, and I don’t think the industry has been very good at following that. When we go to the AI space: if you had talked to me about a year ago, I would have said large scam organizations probably don’t really need to use AI, because it was a little too cumbersome, and they have the people that they either pay or abduct.

Assaf: It’s a whole thing. But we are seeing less sophisticated actors and more sophisticated actors, and I’ve been seeing this for a while, starting to use AI: image generation, voice generation. And actually, one of the things that I’ve been calling out for a while: we are really worried about the bleeding edge of, oh, it can make [00:06:00] amazing voices.

Assaf: It can make amazing videos, which it can. But one of the things that has really metastasized is text generation: creating articles, creating text, creating fraudulent websites at scale. That has been a huge force multiplier for scammers. I can go into a rant on this, but what scammers want to achieve, and this is not only scammers, this is also info operations:

Assaf: They want to create this universe for their victim, in which they control the narrative. And it’s much easier to control the narrative when you can create websites that manipulate SEO, create them at scale, and make articles by experts that support what you’re saying.

Assaf: And at some point, the scale makes it so that the victims lose the sense of what’s real and what’s not.

Luke: This is super interesting. There’s a [00:07:00] bunch of things I want to pull at from that last reply. I suspect our audience maybe never thought about one of the things you mentioned earlier, around scammers now working from home. Can we unpack that a little?

Assaf: It’s kind of a different flavor of working from home. One of the people that I’ve really learned a lot about this from is an assistant district attorney here in Santa Clara, Erin West. She’s been doing a lot of this research, and I’ve been doing a lot of research into these scams called pig butchering scams. I assume a lot of people have heard about those, but the origination of those is that before 2020, you had organized crime, like the triads.

Assaf: They did classic organized crime stuff: building casinos and running human trafficking through them. Then 2020 came, and they had these complexes and casinos that were empty and couldn’t make any money. So they were thinking, I assume, how do we make money? Okay, scams [00:08:00] work. And they shifted those complexes into basically prisons.

Assaf: They leveraged their human trafficking pipeline to abduct people by running other scams, job scams, saying, oh, you’re in Bangladesh, there’s an amazing job in Vietnam, you should come. And then they take these people and force them to scam people all over the world. I think it started mostly in China.

Assaf: And in Taiwan, and then it moved very fast into the US. What happens there is that they sit on multiple phones, and if they don’t do their job well, they get prodded with cattle prods or beaten. And they scam people; usually it’s investment scams, investment in gold and crypto and things like that.

Assaf: It’s a fairly complex type of scam that’s both romance and friendship. It’s kind of on the cusp between friendship and romance, and they convince people to [00:09:00] invest in these fraudulent platforms.

Luke: Wow, that’s wild. It’s such a target-rich environment. I feel like people of retirement age are hit all the time by various scams, and now you’ve got these areas that are basically scam factories.

Luke: It’s crazy. You mentioned too that companies haven’t been doing enough, or people haven’t been doing enough, and you were gonna go on a rant about it. Can I encourage you to go on a rant about it?

Assaf: Sure. So actually, last time I talked about this, I think I annoyed some people in the community who like to blame the large companies for scams metastasizing.

Assaf: And of course, large companies have a hand in this, but in the end, they’re a tool in the hands of the scammer. And the issue, I don’t like to say it this way, but of course there are companies that are not doing enough, or there are companies that are very, very happy with the status [00:10:00] quo and do not want to innovate, because it’s too scary right now to innovate. If you innovate, maybe you’ll fail, and you’ll lose your job because there are layoffs.

Luke: Right.

Assaf: So not all companies are like that. But the thing that a lot of people fail to understand, and it’s totally acceptable, because it took me years of working in places like this to understand it, is scale. A lot of times I’ve seen people on LinkedIn who rail against the companies, and it’s okay to call out the companies and say, you’re not doing enough,

Assaf: ‘cause a lot of the time, they are not. The issue that I’m seeing is that there’s a lot of railing against the companies, but not really a rallying cry around, let’s get these people. And these companies, they’re not law enforcement. They can block, block, block, but they cannot get these people. A lot of the time, the gap is that law enforcement is limited in what they can do and limited in their knowledge.

Assaf: And they’re building it, but they’re still not there. So the issue around scale is this. From the [00:11:00] outside, you see an issue: you were scammed, or somebody was scammed, and you postulate, okay, if this was changed on the platform, that would never happen again. And it makes total sense to you and anybody you talk to.

Assaf: But when you look at it at the scale of these companies, at the scale of their data, that doesn’t work. One, from a policy perspective, and two, if I change one thing, there are so many things down the line that are going to change because of it and create things like false positives. And you might think, well, I don’t care,

Assaf: So other people get their experience eroded. But then that hurts the company; that hurts the user. We need to remember, and it’s not always fun to remember, that the companies are here to make money. Over the years, I’ve seen people rail against the people who do the work at Meta specifically. That really bothers me, because the people [00:12:00] who do the work, and I’m not talking about the CEO, I might not even be talking about the VPs, the people who do the work

Assaf: Work hard. Mm-hmm. They care. They care a lot, and they try to do everything in their power. And companies will be companies: sometimes it’s a headcount issue, sometimes it’s a scale issue. It is very hard working at these companies and continuously hearing, oh, you’re not doing anything.

Assaf: You’re not doing enough. That’s all I’ve been doing; you just don’t see it in your little bubble, because your bubble is part of a two-billion-person community. And I have seen things on LinkedIn, I think it was about extortion, specifically teen extortion, like: well, if I was at Facebook, I would change these things immediately.

Assaf: Within one day. I really sat down and looked at those five things, and I was like, no, you can’t do that; that will break all of this; this won’t work. And while the companies could do a lot [00:13:00] more, it’s not that the teams themselves could do a lot more. The teams need more support, more help, more resources. And sometimes the companies are hamstrung by privacy, by policy; there are things that you just can’t do.

Assaf: On the other hand, I do agree that companies can and should focus more on specific things that we’re seeing, like pig butchering scams, which are, again, very complex, because they don’t happen on one platform; they happen on multiple platforms at the same time. On extortion of young adults and kids, I think platforms can do a lot more,

Assaf: Should do a lot more. Though I think that currently, the people working in these companies are doing everything that they can with what they have.

Luke: Mm-hmm. I think you bring up a really good point about scale too, because the unintended consequences that come from false positives can ultimately cause [00:14:00] you to break other laws too.

Luke: It’s a very complicated issue. I’d love to get your take on this too. It feels like the reflex when these things happen is to go to the root platform where the thing may have occurred. Do you feel that there’s a gap on the enforcement side, on the tech side? I know it’s kind of complicated and you’re dealing with jurisdictions and stuff, but what’s your take on that end of it, like going after these guys?

Luke: Is it a technical problem? Is it a legal one? Is there just not the right tool yet?

Assaf: Usually it’s a jurisdictional problem. For instance, if I find an actor doing something, and they’re really breaking the rules of whatever platform I’m on, and they’re in Vietnam, there’s not much I can do. I’ve run into issues where I’ve found people in other countries, and the answer from law enforcement there is: if they’re not doing anything against our people, we’re not gonna do anything.

Luke: Mm-hmm.

Assaf: So this kind of takes me to [00:15:00] another thought. A lot of times, people, especially in leadership in trust and safety, get really excited about, oh, let’s get the FBI and law enforcement involved, and we’ll go after them, and we’ll send cease-and-desists, or we’ll sue them all. That’s great, but there are two issues here.

Assaf: One: attribution on any platform, of who this person is exactly, is time-consuming. If this is a good actor, it will take an investigator a while to get there, to find exactly who this person is and exactly where they are. And it also depends on the type of data that the company stores. The other issue is what I said about jurisdiction: sometimes they’re just gonna take the cease-and-desist and put it on their wall as a trophy.

Assaf: Sometimes they won’t. Sometimes they’ll get scared. That happens. And this brings me back to the whole idea of threat mitigation. Yes, it’s exciting to go after people, and I love doing attribution. But if we find a problem or an abuse vector, the [00:16:00] way I think about it is: I do wanna fix it now,

Assaf: But I don’t wanna fix just this pinpoint issue. I want to generalize the issue and work with engineering to create rules, models, and, if necessary, some tweaks in the product, so that specific issue can never happen again.

Luke: Mm-hmm.

Assaf: And this takes me back to something that a leader at another company said to me in the past.

Assaf: They were really surprised when they ran an enforcement sweep on an abuse type and the attacker immediately came back, and they said, wasn’t this solved? So this is a conceptual problem that I’ve noticed in some places. You’re not solving this, the same way the police aren’t solving crime. You are making it harder for them to do, more expensive and more costly.

Assaf: So [00:17:00] maybe what you did will take them off the platform. Maybe, if you did it at a larger scale, it will push them to try a completely different vector. But they’ll be back. They’ll try other things. They’re not gonna stop. This is how they eat; this is how they feed their families. Mm-hmm. That’s a big issue that I see in some companies. Two things: one, let’s fix this one issue right now,

Assaf: ‘cause it kind of alleviates my stress or the escalation. Mm-hmm. And two, not really understanding that these threat actors are not kids that are just doing this for fun. They’re gonna be.

Luke: Very sophisticated too, in a lot of cases. Touching on the attribution piece, this might be controversial, so feel free to stop me on this if you want.

Luke: But as we’re seeing right now, in the past couple weeks, these new age verification requirements, framed as online safety, in Australia and the UK and even some states in the US. Privacy people will say this is much [00:18:00] more about control than safety, and the people pushing it are saying it’s all about safety.

Luke: What’s your take on how effective these things might be?

Assaf: I have a couple of thoughts about that. One, and I wouldn’t say this is purely it, I tend to state things in a very big manner sometimes, but this is a regulation thing. It’s the countries that care. And by the way, I know Australia does a great job; they really care and really push platforms, and there are other countries that do that.

Assaf: Brazil, for example. They have their understanding of what’s going on, and they’re saying, listen, from our perspective, you shouldn’t have kids on your platform, and that’s it; just do age verification. What age verification does is add friction, in a good way and a bad way. If we focus on the good way, yes:

Assaf: If I’m a child and I want to get on the platform, I’m like, no, okay, can’t do it; I don’t have access to anything else, so I can’t do it. The flip side of that is that [00:19:00] it’s friction, not a solution. What it does is it will probably reduce the number of children on the platform. But people lie.

Assaf: You can get a fake ID very easily. If it needs a credit card, you can get a fake credit card very easily. If you are persistent, you can very easily get past these things, unless I’m wrong and they’re doing heuristic identification of how this person acts in their day-to-day life, which would surprise me.

Assaf: In the end, it really feels to me like a check-the-box activity. That doesn’t mean it’s useless or bad; it does something. But what happens is the same thing I was saying before with that executive: oh, so we solved it? No. What you solved is the regulators currently getting on your back.

Assaf: So you showed them: hey, I’m doing this. Is it solving the problem? No. Let’s say it’s reducing the problem by 40 percent, but you still have it on the platform, [00:20:00] and if you don’t do anything more robust, it doesn’t really matter. Yes, you’re affecting people, which is great; you’re keeping some children off the platform.

Assaf: That’s awesome, but you’re probably not reducing the victim space by that much. And that’s not even the main problem. The main problem is that when companies do this, they go out there and tell everybody that they did a thing.

Luke: Hmm.

Assaf: Without understanding that it didn’t really solve anything. And what it creates is this air of, oh, they solved that.

Assaf: Everybody’s safe now, so I need to care about this less. And it does exactly that: you care about it less, the regulators get off their back, and they continue doing their thing. And again, this is not a malicious thing. This is just how people work and keep the status quo, especially people who are afraid of getting laid off because they tried something that didn’t [00:21:00] work.

Assaf: So yeah, that was a little bit of a long-winded answer to what I think about age verification or any of these magic policy fixes. In the end, if you do anything that’s not holistic, without understanding what you’re looking at, you’re not fixing anything.

Luke: I think it’s super helpful.

Luke: That’s kind of one of the reasons why I was excited to talk to you, because everybody shops these PR wins, or almost politically motivated things in one form or another. But I feel like people who aren’t directly working with these things sometimes don’t realize that they’re not a magic bullet, even if you do have everybody putting their ID in.

Luke: I remember with some of these KYC systems, you could put in a picture of a coffee mug and it would let you get through. So I think it’s really great perspective. Shifting gears a little bit, you’ve done some red teaming, right? Would you mind sharing a little bit about what red teaming is, and where it’s effective and where it kind of isn’t?

Assaf: To be totally frank about red teaming: I started looking into red [00:22:00] teaming and organizing red teaming where I am now. I’m not a red teamer by any means, but I now have a much better understanding, and I’m also drawing from a lot of people I’ve spoken with in the last several months who are experts at all kinds of companies.

Assaf: How we should be looking at red teaming is from an adversary’s perspective: really trying to use all the tools at our disposal to circumvent the safety features of the company we’re trying to get after. The thing that I’ve not seen yet, robustly at least, is a red team exercise or a test that

Assaf: Actually creates the attack from infrastructure that looks like the attacker’s infrastructure. Really understanding: okay, what’s the attack you’re looking for? What are they going to try to do? What I usually see is automated stuff, which is great, for [00:23:00] example, for AI: prompting a bunch of things that shouldn’t happen, or trying to break the model.

Assaf: But what I think is most effective for companies is for red teamers to really break down the process, from beginning to end, of what an attacker looks like on your platform.

Luke: Mm-hmm.

Assaf: And what are they doing? Because for safety, that’s a much more straightforward way to create rules around it.

Assaf: Mm-hmm. And to create robust heuristics. Because basically everything I ever talk about is that the attack is not just the manifestation at the end. Mm-hmm. It’s not just that someone created a prompt or someone scammed someone. It’s everything before: how did they get here? What did they do to get here?

Assaf: How did they build their reputation? All of that. It’s a lot harder, but I find it to be a lot more helpful when I get that information from our red [00:24:00] teamers.

Luke: Yeah, that’s awesome. It’s super useful too. The truth is in what you were saying originally, around money being a motivator here, right?

Luke: They’re gonna exploit something and then try to capitalize on it, and if you’re not looking at the whole 360 view of it in depth, then you’re just gonna deal with the symptom versus the cause, or versus making it so expensive that it’s not worth it anymore.

Assaf: Yeah. And whenever I work with new investigators, or even when I investigate myself,

Assaf: I always ask myself, when I look at any activity: why? Mm-hmm. Why are they doing this? What’s the gain? And sometimes I’ll see something and I’ll go to other people: I don’t understand why they’re doing all of this. And sometimes they’ll know, and they’ll say, oh, it’s this type of abuse that goes this way and then ends here.

Assaf: If I don’t have the why, it’s very difficult for me to explore what’s happening, because I don’t know which direction to go to actually look at the issue holistically.

Luke: Mm-hmm. On a bit of a different note, but related [00:25:00] to all this: you hear the word accountability thrown around a lot with regards to safety.

Luke: I mean, it’s gotta be hand in glove, right? There are just obvious cases of either neglect or whatever that are causing harm. From your point of view, what’s effective with accountability, other than people just talking about accountability?

Assaf: Yeah, I agree on that. What I’ve seen recently that’s been helping, especially in the scam space, is regulation.

Assaf: The more regulation there is, the more pressure there is on the companies, and then the companies put more pressure on the teams under them.

Luke: Mm-hmm.

Assaf: To fix things. Regulators have a limited perspective and a limited view, which is by design. And what I’ve noticed happening in some companies is that it becomes:

Assaf: How can we tell a story that satisfies what the [00:26:00] regulators want to hear?

Luke: Hmm.

Assaf: It doesn’t necessarily solve anything. Mm-hmm. Or maybe it looks like a win, but it doesn’t really solve anything. It’s just: how do we satisfy them, look really good, and get them off our back for a little bit? And some companies

Assaf: Have really, really good PR. I will say that throughout my time at Meta, they didn’t have good PR, as we know, and that forced them to do a lot of the work. Mm-hmm. We always lamented the PR, like, really? He said that? Okay. And that forced the safety teams to do a lot of the work, and that’s why Meta is a leader in that space.

Assaf: Mm-hmm. Other companies have very, very good PR that is able to keep them under the radar. And it also helps that Meta takes all the fire, because they have horrible PR. I would say that if Meta disappeared tomorrow and there were no Meta anymore, several of the very large companies would be in a lot of trouble.

Assaf: Mm-hmm. Because all the fire is gonna go to them. Going back to accountability: I think regulation is [00:27:00] very helpful for accountability there. And yes, we need to keep tech companies accountable for what’s happening on their platforms, to a degree. Tech companies are not the police, and it took a while to realize that.

Assaf: I used to not agree with this, but tech companies should not be the arbiter of what’s true. They just shouldn’t. I think that got bastardized around 2020 to 2022, especially in the larger integrity teams, when we were all fighting for what is right with COVID and things like that.

Assaf: I don’t believe that was the right thing to do, and it created a lot of other issues, though I did believe in it at the time. What I’m trying to say is: it’s very difficult to figure out how to keep these companies accountable for what’s going on, because they are bound by so many things. They’re bound by their need to make money, and they’re bound by politics.

Assaf: Now that’s a compounding issue. But [00:28:00] I really feel that in order to hold companies more accountable, we need to hold ourselves more accountable too. And what I’m not saying here, I wanna be very clear on: I’m not talking about victims needing to be more accountable.

Assaf: And what happens to them is not their fault. But we need to keep the discourse more accountable: okay, what are we not doing right? Yes, the company missed this, and yes, they should be better, and we should keep them accountable. But we’re kind of at a point of: yeah, law enforcement’s not gonna do anything about this.

Assaf: Nah. Regulators, they do all they can. Government, they don’t really care. I feel like we got to a point where we’re like, eh, and it’s much easier to rail against these companies and say, well, you’re making money off of this. We kind of let government and law enforcement skate. I’m not saying they’re not doing their job.

Assaf: They are, but we’re asking a lot less accountability from them. I see [00:29:00] a lot of discourse about needing accountability from tech platforms, which we do, but we also need accountability from politicians. And that comes with having people in government and in law enforcement who know what they’re talking about with regards to tech.

Assaf: There are those people; there are just not a lot of them. And then you get these congressional hearings that are a joke, because the person does not know what they’re talking about. And that doesn’t only make that person look bad, it hurts helping people, because you are asking questions that are inherently wrong, or you just don’t understand what you’re asking.

Assaf: So while, yes, we need to keep companies accountable, we need to understand what we want to ask of companies, ‘cause right now we’re asking everything: save free speech; make sure nobody gets offended, which a lot of the time conflicts with that; make sure there’s no discrimination or racism, again, great; go after these people.

Assaf: [00:30:00] Make sure that they can never open an account, never let anything get through and make sure that they all get arrested. There’s only so much these companies legitimately have the power to do.

Luke: And it kind of makes me wonder, and I’ll apologize in advance if I butcher how I frame this.

Luke: I mean, one of the things I started noticing when I started working in tech is that these developer communities, and the culture too, are pretty tight-knit in a lot of ways. People who don’t work in tech might not have that mental model, might not realize that, hey, a lot of these developers and engineers talk to each other a lot, even though they might be working at competing companies, right?

Luke: Is there more that could be done on the cultural side, in the developer communities, around accountability?

Assaf: So it’s really funny that you ask that, because I can’t speak as much to the developer community, but I can speak to the trust and safety community, and there are many of these [00:31:00] efforts from people who care at multiple companies who try to put everything together.

Assaf: And usually, well, not usually, all the time, they hit a wall, because there are legal implications and there are privacy implications. If all these tech companies shared everything all the time, it would be a lot harder for these actors to do anything, because I would know what was used at other platforms, and they would give me information.

Assaf: You cannot do that. That’s competition laws, privacy laws, collusion. You just can’t do it. And I find myself at these conferences or these meetings where we really try to do things holistically across companies, and we can’t. I’m not gonna dismiss the efforts. There are a lot of things that can be done and are being done, but as a whole, it’s extremely difficult.

Assaf: Everything needs to go through legal. You just don’t wanna get sued, and if you share that data, you’re going to get sued.

Luke: Right. No, that’s super helpful context. I think a lot of folks don’t realize, hey, these [00:32:00] conversations even happen, right? If you’re in the space, you know, but if you’re not, you don’t.

Assaf: Right. I can talk to people at other companies.

Assaf: But once I start asking them for specific indicators, I can’t do that, because that’s sharing their information that didn’t go through legal. Talking to other people is fine, and talking about tactics and types of adversaries, that’s fine. But when we get down to sharing indicators, like even IP addresses or email addresses, there are ways to do that, but they’re extremely cumbersome and hard to set up. They’re possible, but they’re definitely not perfect, and they don’t work for everybody.
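[Editor’s note: the “cumbersome but possible” indicator sharing Assaf describes often boils down to exchanging keyed hashes instead of raw values, so two companies can spot overlap without handing each other user data. Below is a minimal Python sketch for illustration only — it is not any specific company’s scheme, and the pre-shared key and indicators are made up.]

```python
import hmac
import hashlib

def hash_indicator(indicator: str, shared_key: bytes) -> str:
    """Keyed hash of a threat indicator (IP, email address, etc.).

    Both parties compute the same digest for the same indicator,
    so they can compare matches without revealing raw values to
    anyone who lacks the pre-shared key.
    """
    normalized = indicator.strip().lower()
    return hmac.new(shared_key, normalized.encode(), hashlib.sha256).hexdigest()

# Hypothetical key, agreed out of band between the two companies.
key = b"pre-shared-key-agreed-offline"

# Each side hashes its own indicator list locally...
ours = {hash_indicator(i, key) for i in ["198.51.100.7", "scammer@example.com"]}
theirs = {hash_indicator(i, key) for i in ["198.51.100.7", "other@example.com"]}

# ...and only digests cross the wire; the intersection reveals
# shared indicators without exposing either side's full list.
overlap = ours & theirs
```

A keyed HMAC (rather than a plain hash) matters here: a bare SHA-256 of an email address can be brute-forced from a dictionary of guesses, while the shared secret limits who can even test a guess. Real-world programs layer legal agreements and access controls on top, which is much of the “cumbersome” part.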

Luke: Awesome. No, no, this is super helpful. Is there anything we didn’t cover that you might want people to know about as we wrap it up?

Assaf: I think one of the things that I’ve learned recently, before I joined ElevenLabs, is that there’s a perception that the large companies are doing everything [00:33:00] they can, or at least a lot of the things they can, that they can basically do anything. Like, oh, it’s not possible that this company can’t find information X, because they have all the information and all the tools.

Assaf: And what I learned, especially after the layoffs started, is that at large companies, or even medium and smaller companies that have layoffs, especially in the trust and safety community and space, you can see their innovation in the space declining, because people don’t want to innovate. And if they do try to innovate, they get stifled by people above them.

Assaf: And again, this is more the larger companies that don’t wanna rock the boat. Because if I don’t rock the boat, and I build my little kingdom, and I show that my kingdom is necessary for the work, then I’m not gonna be on the chopping block next. This is something that I’m [00:34:00] definitely not seeing where I am, but what’s happening is that the freedom to innovate in the space is being taken away, and that harms everybody.

Luke: Mm-hmm. No, that’s super useful and good to know. If people wanna follow more about your work or anything you’re putting out there, where would you recommend they go?

Assaf: I’ve been a little less active ‘cause I got really busy, but I’m on LinkedIn, under my name, Assaf Kipnis. I have a Substack that I need to get back to, but usually LinkedIn is the place that I’m most active on.

Luke: That’s awesome. Well, Assaf, I really appreciate you taking the time out, and great conversation. Love to have you back someday, too, to check back in on some of these things, if you’re open to it.

Assaf: Yeah. Thanks so much, Luke. I really appreciate it. And it was, it was really fun. It’s fun to rant sometimes.

Luke: I love it.

Luke: I love it. Thanks, we’ll talk soon.

Assaf: All right, thanks.

Luke: Thanks for listening to the Brave Technologist Podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t [00:35:00] already made the switch to the Brave Browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately.

Luke: Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • New tactics and scams threat actors are using, and the effectiveness of measures like age verification and red teaming
  • Limitations faced by tech companies in combating online safety issues, and the challenges of maintaining online safety at scale
  • The role of law enforcement and regulation in pressuring companies, platforms, and teams to improve online safety

Guest List

The amazing cast and crew:

  • Assaf Kipnis - AI safety (intel and investigation) at ElevenLabs

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.