Accountability Requires Identity: AI Agents & the Future of Digital Trust
Luke: You’re listening to a new episode of The Brave Technologist, and this one features Evin McMullen, who’s the CEO and Co-founder of Billions Network, the first universal human and AI network. Billions is pioneering a mobile-first identity layer that verifies both humans and AI agents, proving uniqueness, KYC/AML status, location, age, or even machine reputation, all while preserving privacy.
Evin’s previous work includes early blockchain development for retail and enterprise. She’s contributed as a leader to data standards and on-chain identity, and previously served as CEO and founder of a verifiable data platform, Disco.xyz. In this episode, we discuss how Billions is here to save the internet by starting first with scaling trust, the biggest pain points in verifying real users and agents today, and how some of their biggest partners, like HSBC and TikTok, are using their tools.
We also cover the risk for humanity if we don’t solve decentralized identity before AI and agents become ubiquitous, and all sorts of interesting areas where privacy and identity can flourish in this AI and crypto environment. And now for this week’s episode of The Brave Technologist.
Evin, welcome to The Brave Technologist. How are you doing today?
Evin: Hello. Hello. It’s another beautiful day online. Thank you so much for having me.
Luke: Of course, I’m really excited for this conversation, especially now ‘cause we’re kind of entering this world where humans and AI agents are coexisting online.
From your point of view, kind of how do you define identity in that context and why does it matter more now than ever?
Evin: So at Billions Network, we are the builders of the first universal human and AI network, built with mobile-first verification. What that means is that you can prove who you are from your own device without revealing any of your private data, such as proving that you’re a human being and not a bot.
In the age of AI, this is especially important, where more than half of online transactions and interactions come from unidentified, unaccountable machines, not human beings. And so if we are to embrace things like digital payments, or AI agents that are able to assist us in doing everyday tasks, then we need to know who we are interacting with, who we are paying, and who we’re trusting with our data. And so that’s why it’s really important for us to be able to extend the ability to prove your identity not only to all human users of the internet, but also to the bots and agents who serve them.
Luke: That’s awesome. Yeah. Well, what inspired you to build Billions Network, and what problem are you most urgently focusing on?
Evin: So, our team is here to save the internet in the age of AI. We want to be able to scale trust in things like content, being able to discern what is real and what’s been digitally manipulated. We want to be able to scale purchases in an ongoing way, whether that’s access to actual physical services and objects, or subscriptions, or even things like newsletters and social platforms. We want to be able to trust where our value is going and what information we’re interacting with. And so at Billions Network we see this as the missing layer of the internet. Originally the internet was built for computer-to-computer interaction and identity, but did not incorporate a way for human beings to be able to identify and discern themselves from machines.
And so now, enabled with new frontier technologies like blockchains in combination with AI, we have both a massive opportunity but also a massive risk that we need to address together.
Luke: Yeah, no, that’s great, and it kind of leads me to the next question, ‘cause you touched on this a little bit earlier, and I think it’s important too, because a lot of people feel torn between convenience and privacy online, and I’m really curious to get your take on how Billions designs for both of those.
And I think you were touching on it a little bit earlier, where it was around proving who you are, that you’re human. But we’re even seeing all sorts of legislation, and more and more of the internet kind of getting policed at the national level. So I’m really curious to get your take on how Billions is balancing both of those things, and what your take is on that.
Evin: So thanks to the incredible development of novel technologies such as zero-knowledge proofs, which allow us to learn about data without actually seeing the data itself, we don’t necessarily have to sacrifice privacy for safety in our digital spaces. There actually have been longstanding laws that govern, for example, what information children are allowed to access online, such as the Children’s Online Privacy Protection Act here in the United States. But that becomes increasingly challenging when, as you note, our digital world extends across state and national boundaries.
And so, with the advent of more and more rigorous age-based access control laws that govern access to content, or things like KYC laws that govern access to payments or financial assets, it’s especially important that we be able to impose the appropriate compliance criteria on our digital experiences while allowing them to be as open and accessible and safe as possible.
So what’s really cool about tools like zero-knowledge proofs is that you can, for example, determine that a user is over the age of 18 without revealing their actual date of birth to an application or website. And so we’ve worked closely with partners like the European Commission to help introduce these technologies in a sort of government-approved and safe way, to show small businesses and developers that there can be more efficient, cost-effective, and secure ways to comply with laws and keep their users safe while managing risk.
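The age check Evin describes can be sketched in miniature. The following is a simplified, hypothetical illustration of the shape of selective disclosure, not a real zero-knowledge protocol (a production system would prove the predicate inside a ZK circuit, for example with Circom and snarkjs). The holder commits to a birth date, and the verifier ultimately learns only the single over-18 bit.

```python
import hashlib
import os
from datetime import date

def commit_to_dob(dob: date, salt: bytes) -> str:
    """Hiding commitment to the date of birth: binds the holder to a value
    without revealing it to the verifier."""
    return hashlib.sha256(salt + dob.isoformat().encode()).hexdigest()

def over_18(dob: date, today: date) -> bool:
    """The age predicate: the single bit the website ultimately learns."""
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return (today.year - dob.year - (0 if had_birthday else 1)) >= 18

# The wallet holds the raw data; the site sees only a commitment and a verdict.
salt = os.urandom(16)
dob = date(1990, 6, 15)
commitment = commit_to_dob(dob, salt)
print(over_18(dob, date(2025, 1, 1)))  # True -- the verifier never sees 1990-06-15
```

In a real deployment the predicate would be evaluated inside the proof itself, so the verifier never has to trust the client’s arithmetic; the point here is only which data crosses the wire: a commitment and a boolean, never the date.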
Luke: Yeah, and that’s a great point too, ‘cause I think that’s the part that spooks a lot of people: I don’t want to have to run an ID check everywhere I go and hand over really personal information, right? You’ve got your home address on your ID, and a whole bunch of other pieces of information that, in the wrong hands, mean you get weird knocks on your door, or worse, right?
So being able to prove that without handing it over... I’m really curious how this gets implemented at scale. Is it something where maybe a government agency is using Billions Network, or some type of technology like this, and then generates a proof, and then that same proof can get used everywhere else? Maybe you can help unpack what the steady state of this could be. Is that kind of how it works? Or maybe shed a little light on that.
Evin: Absolutely on the right track. So with Billions Network, we build using open standards that were developed by the World Wide Web Consortium, the good people who brought us the internet that we have today.
And so what that means is that when we enable data to be created through our application, or through our SDK inside of other websites or applications, that data is already interoperable, meaning that it is ready to interact with both existing legacy systems, such as government ID systems, as well as blockchain-based payment rails, such as stablecoin payments that could be used for AI agents.
And so what’s really cool about these open standards is that they are tamperproof, meaning that this data cannot be modified once it’s signed by the government. So for example, the data in your passport is signed by the government entity that issued that passport to you, and as long as the signature on that data remains intact, then you can use it to generate zero-knowledge proofs from your very own device without having to actually ask permission from the government to do so.
In the same way that you can pull your driver’s license out of your wallet in physical space anytime you want to, you don’t have to ask for permission to be able to do that. What’s especially interesting here is that we use what’s called client-side proving, meaning that there’s no centralized server.
There’s no third party that you have to rely on to be able to prove that you have information in your own documents, such as proving that you have a passport, or proving that you’re over a certain age based on that government document.
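The “signature stays intact” property can be illustrated with a toy sketch. This is a hypothetical simplification: real e-passports carry asymmetric signatures from the issuing state (ICAO Doc 9303), while this example uses an HMAC purely to stay dependency-free. What it shows is that verification happens entirely on the holder’s own device, with no call back to the issuer.

```python
import hashlib
import hmac
import json

# Toy stand-in for the issuer's signing key; a real passport is signed with
# the issuing country's asymmetric key, not a shared secret.
ISSUER_KEY = b"hypothetical-government-signing-key"

def issue_document(fields: dict) -> dict:
    """Done once by the issuer: sign the canonical form of the data."""
    payload = json.dumps(fields, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"fields": fields, "sig": sig}

def verify_locally(doc: dict) -> bool:
    """Runs on the holder's device: no network call, no asking the issuer."""
    payload = json.dumps(doc["fields"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(doc["sig"], expected)

doc = issue_document({"name": "A. Holder", "dob": "1990-06-15"})
print(verify_locally(doc))           # True: signature intact
doc["fields"]["dob"] = "2010-01-01"  # tamper with the data...
print(verify_locally(doc))           # False: tamper-evident
```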
Luke: That’s another great point too. That client-side implementation is a super important piece of that.
And yeah, I think another area too: we’re hearing a lot about proof of humanity, and also things like machine reputation, and I think this touches on what you were opening with, around enabling agents to know who you are, or to prove to them that that’s who you are. How do these concepts actually work in practice, and what do you see them unlocking for users and companies?
Evin: So from a very basic perspective, accountability requires identity. We cannot hold agents accountable for their actions, nor can we trust in the accuracy, for example, of our interactions with them or payments to them, until we can call them by a given name, until we can discern them from other agents.
And so the first step for our team was to develop what we call DeepTrust, a framework that makes it easy to assign a unique identifier, or identity, to an instance of an AI agent. So now we can implement a name that’s associated with that agent, unique and specific to that agent, that can then easily be related back to, or held accountable to, an individual or an organization such as a business.
So this is sort of an early instance of being able to prove that an agent is related to, or a representative of, a specific organization or individual, and that’s when we can get more fun with it. We can allow that agent, for example, to also receive zero-knowledge proofs that prove it is acting on behalf of an individual who’s a citizen of the United States, over the age of 21, or has a certain KYC status.
And so this allows us to enable more sophisticated workflows or types of interactions, setting rules between agents about the type of data they can share with other types of agents, or who those agents are allowed to represent in those interactions. And so without this ability to assign identity, facilitate accountability, and add more types of data, we are in a very, very low-trust environment in these agentic workflows, and thus we’re very limited in terms of what we’re able to do: basically just overcollateralized lending and treasury allocation, which are very limited, low-trust activities.
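A minimal sketch of the idea behind assigning identity to an agent instance (the names and fields here are hypothetical illustrations, not Billions’ actual DeepTrust API): a unique agent identifier is bound to an accountable principal and carries verified claims, and actions are gated on those claims.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str    # unique name for this specific agent instance
    principal: str   # the human or organization accountable for it
    claims: set = field(default_factory=set)  # proofs relayed from the principal

def may_act(agent: AgentIdentity, required: set) -> bool:
    """An agent may perform an action only if its identity carries
    every claim the action requires."""
    return required <= agent.claims

bot = AgentIdentity(
    agent_id="agent:acme:assistant-7",
    principal="did:example:acme-corp",
    claims={"kyc:passed", "age:over-21", "citizenship:US"},
)
print(may_act(bot, {"kyc:passed", "age:over-21"}))  # True: claims present
print(may_act(bot, {"license:bar-accredited"}))     # False: not attested
```

The design choice worth noting is that the agent never carries the principal’s raw data, only claims the principal has proved about themselves.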
Luke: No, it’s interesting too, and I’m kind of wondering, we’re talking about agents a lot, but can these also be used in certain environments or contexts? Let me put a little color to that, right? Right now, part of our agentic strategy at Brave is that, as opposed to letting the agents run amok across everything you’re browsing, we’re thinking about having almost containerized or profile-based agents or environments, where you can treat one like a lawyer’s office and one like a doctor’s office, environments where you don’t want them to necessarily cross over or leak. Could you use a solution like this with an agent in that type of environment to then go do follow-on tasks or something like that?
Evin: Certainly. I actually think the approach you’ve described is uniquely well suited to how we’re thinking about it, where it’s important that the controller of an agent have the ability to discern which data is shared with that agent, and in what context. And that doesn’t necessarily have to be the raw data itself; it can just be traits or capabilities. Furthermore, if agents are doing things such as financial transactions, or extrapolating insights on things such as health, it’s very important that users be able to understand the way in which those outcomes were achieved, and to be able to trust the outcomes or recommendations that they provide. And so for managing multi-agent workflows, where you may have specialized agents interacting with one another, we also want to make sure that they’re sharing data with each other appropriately, and that the tasks or actions they’re permitted to do are within the purview of how they’ve been deployed.
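The data-sharing rules between specialized agents that Evin mentions could be sketched as a simple allow-list policy. This is a hypothetical illustration of the idea, not any particular product’s mechanism: each (sender, receiver) pair is permitted only certain data categories, so a health agent can hand a payments agent an invoice total but never a diagnosis.

```python
# Allow-list: which data categories one agent may pass to another.
# All agent names and categories here are made up for illustration.
SHARING_RULES = {
    ("health-agent", "payments-agent"): {"invoice-total"},
    ("research-agent", "drafting-agent"): {"citations", "notes"},
}

def allowed_to_share(sender: str, receiver: str, category: str) -> bool:
    """Default-deny: anything not explicitly allow-listed is blocked."""
    return category in SHARING_RULES.get((sender, receiver), set())

print(allowed_to_share("health-agent", "payments-agent", "invoice-total"))  # True
print(allowed_to_share("health-agent", "payments-agent", "diagnosis"))      # False
```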
Luke: It’s super interesting, because the picture you’re painting here seems like a much more practical one. ‘Cause there are all these cases where, in tech, adoption will sometimes outpace things. For example, I’ll just put it out there: Sam Altman was talking about how people are using ChatGPT like a therapist’s office, putting really sensitive information in there.
And he’s like, hey, look, if we get subpoenaed or something, we have to turn over records, right? And so what you start to hear from a lot of these folks that are having to work backwards on it is, well, what if we just make an agent that’s like a lawyer? But in this case, I could see lawyers actually having agents of their own that are trained on those bar-accredited parameters and all that stuff, so that this is an agent on behalf of my lawyer, where the lawyer knows the agent is coming from that scope, right? It seems like a much more practical way than getting rid of lawyers altogether, making sure that it’s actually coming from the right lawyer, who’s briefed on the right thing and is vouched for in that kind of way. I don’t know, maybe I’m getting a little too out there on it, but I’m just curious, does that kind of fit the rubric a little bit?
Evin: Oh, that certainly resonates, and I think there are a few different ways to look at the adoption of agents into professional workflows today. I think the most common is that of tooling to improve the efficiency of processes that are already part of everyday workplaces, such as research collation and the writing of footnotes in the legal field.
Mm-hmm. In fact, we’ve seen examples exactly as you’ve described for about the last decade, evolving in maturity, but of course always accountable to senior partners, or folks at that law firm, who are going to stand by the information that’s been collated, reviewed, or even improved upon through these agentic workflows.
I think it’s also very much worth noting that the question of intellectual property and the provenance of data for training models and resources like LLMs is one that is falling along different geographical jurisdictions in terms of how it’s being treated. So, for example, in Japan, the utilization of protected intellectual property in service of training models constitutes fair use, whereas in the United States, arriving at that outcome has definitely been more of a discourse in the courts, really a case-by-case basis, and not something we’re seeing as a universal assessment. And so in the interim, I think many businesses want to err on the side of caution, protecting their trade secrets and the unique aspects of their business operations that they may not want to disclose in a manner outside of their own choosing.
And so things such as localized models, or rules-based engines that can compute on data without revealing it, I think are going to rise in popularity, as we’ve already begun seeing happen.
Luke: Yeah, that makes sense. Especially since, even on the crypto side, there’s a lot of nuance there too, with how things are gonna be deployed and where, national versus global things, et cetera, I think.
But, but speaking of tools, I mean, your tools have been adopted, by companies like HSBC, TikTok and Scroll. What do they see as the biggest pain points in verifying real users and agents today?
Evin: So one constant challenge that many of our partners reach out to us with is understanding how we can use the existing hardware that is in everyone’s homes and pockets in order to achieve these legally compliant, privacy-preserving, and scalable outcomes.
And so there are many different methods of identifying users that require special camera equipment or proprietary hardware. You may have seen, for example, in airports worldwide, there are sometimes gates that will take a picture of your face, sort of purpose-built for that objective.
However, not everyone has those devices in their homes, nor are they uniform across the world. And so using standard-issue hardware, laptops, desktop computers, mobile devices, smartphones, is of paramount importance for many of our partners who serve global user bases who already own these devices.
And so that’s where our ZK libraries, or zero-knowledge proving libraries, that allow you to use this existing hardware to generate these privacy-preserving proofs about data become really valuable, because they’re very friendly, kind of easy to cut your teeth on. For example, libraries like Circom, snarkjs, and rapidsnark are some of the most popular introductory libraries that students and universities around the world begin with, a body of tools and precedent for building privacy-preserving applications.
Luke: Awesome. Yeah. And what lessons have you learned from bringing Web3 identity tech into these legacy and mainstream systems?
Evin: So, I think it is crucial for us to remember what might feel a little obvious when I say it, which is that everyone on earth has had an identity since the day that they were born.
Mm-hmm. And so the relevance of what we’re talking about today is necessarily universal. We never asked anyone for permission to have our own identities, but when we come to internet interfaces, we certainly do a lot of granting of permission, a lot of repeated actions, a lot of filling out forms and disclosing data without a clear understanding of the ramifications of what that means.
I do not think the answer is to burden citizens of the internet with a massive academic exercise to understand what is happening with all of their data. Rather, I think the answer is in frontier technologies that deliver both improved business outcomes and risk management for the operators of content platforms, websites, and digital experiences, while also offering safer and friendlier experiences for citizens of the internet themselves.
And so leaning on our regulatory leaders, and leaning on our technology and research leaders, has already shown leaps and bounds of improvement over the past few years, and I think we’re gonna continue to see an upward trend.
Luke: How has it been working with these regulatory bodies on this topic? Have they been receptive to it? Do they have a good understanding of how this stuff works, or has there been a lot of education required from your side?
Evin: You know the old adage about falling asleep or falling in love: slowly, then suddenly. That’s how regulatory environments embrace cryptographically based technologies for security.
But in all seriousness, it has been a short few years since the concept of zero-knowledge proofs, or privacy-preserving verifiable data, sounded like sci-fi. In fact, in 1997, Professor Larry Lessig at the Berkman Klein Center for Internet & Society at Harvard was suggesting that you might be able to prove your age without showing the doorman at the bar your exact birth date on your license.
And now we have arrived at that outcome from a technical feasibility and market adoption point of view. So now that we’re sitting on this precipice, it’s more a question of switching costs and process for adoption than of whether or not this is even possible.
Luke: That’s great. Yeah, and I can attest to that too. I think we started using zero-knowledge stuff back in 2016, 2017. And I remember going in, this was before GDPR went into effect, and even then GDPR was kind of an eye-glazing exercise with a lot of the publishers and the community, on the tech side and the business side.
But then all of a sudden, it was something everybody had to start caring about. And I think you’re totally right that it is happening now. And speaking of the sci-fi example from the late nineties: looking out into the future, imagine you’re five years out from now. What does a user-first kind of internet look like once human and AI identities are fully verifiable?
Evin: So, I’ll paint the rosy picture in a moment, but I think the first serious reality of risk that we need to acknowledge is that organized Sybil swarms will grow in maturity and sophistication, as do our methods of detecting them.
So I think we need to anticipate fleets of rogue agents flooding our websites and DeFi protocols with the intention to manipulate, whether it’s the outcomes of search engines, or LLM responses seeded with known inputs of data, or the yield in the decentralized finance ecosystem being deliberately manipulated with the intent to exploit. And we’ve already seen examples of this before, such as things like flash loans in 2025 in the DeFi space. But I think ZK-based identity for humans and agents can really address this potential risk, by proving agent and user uniqueness and allowing users to attest that they are a single entity backed by a human principal, without revealing data about their keys or those specific users, significantly reducing fraud as a result.
So on the rosier side, I think that in the next, call it five years, 70 to 80% of all digital transactions will occur as a consequence of agents precipitating them, as opposed to human beings click-trading or transacting in a physical, manual sense, pushing buttons. And so what I think that means is that more of the time, effort, and labor of search and discovery will be taken off the plates of human beings, who will have more time to do whatever it is they want to do, and not necessarily have to be wound up in the same type of digital administration that they are today.
Luke: Sounds cool. I mean, it sounds like a lot less room for error: pushing the wrong button, putting the wrong number in, or whatever. And the folks who have done a lot of crypto transacting are bound to have made those kinds of mistakes, from a lack of measuring twice, I guess. It’s really interesting too, ‘cause I think people don’t really understand how broken a lot of the existing systems are, and how bad the quality is on these things. So having these systems like you’re talking about, where you can prove things client-side, based off of established identifiers that are out there, means you’re not having to risk putting your stuff into kind of a junky system. I remember people were KYC-ing coffee cups and stuff and getting them through, and it’s just like, what are we doing here, you know? So it sounds like you all are helping raise the bar on provability, and putting more reliable and safer resources out there, which is super cool.
I mean, what does success look like for the Billions Network in the next 12 to 24 months, from your own point of view?
Evin: Well, I can’t comment too much on the state of coffee-cup identity. What I can say is that over the next 12 to 18 months at the Billions Network, we are very excited to unveil a number of collaborations and integrations with our partners in the government and enterprise space, helping to bring this technology to even greater scale around the world. It’s also been especially meaningful for us to bring value to early-stage blockchain companies working in the R&D space, bringing this frontier technology to market in innovative ways, while also supporting stalwarts of industry and some of the organizations and structures that hundreds of millions and billions of people rely on every single day.
And so that, I think, is evidence of the openness of our network and the general-purpose capabilities of our tech: that it can support burgeoning hackathon projects with the same level of security required at a nation-state level. And so, as identity continues to be critical national infrastructure, we’re especially excited to bring that security, with equal access, around the world.
Luke: Awesome. This has been an enlightening conversation, Evin, and I really appreciate you making the time. Is there anything we didn’t cover that you wanted to get out there today, while we have you here?
Evin: I think one thing to note is that the opportunity space for agent identity is meaningfully larger than that of KYC or human identity alone. This concept of machine-to-machine interaction, and of AI agents enabling a friendlier and more accessible future of the internet, is predicated on our ability to identify and verify them.
So if you take anything away from this conversation, please remember that accountability requires identity, and identity for agents is going to unlock the next era of the internet.
Luke: Wow, that’s awesome. What a great note to end on, I think. Where can people follow you and follow your work, if they’re interested in doing that?
Evin: Our team welcomes new explorers and users, whether developers or individuals interested in this technology. You can always find our site at billions.network. We’re also very active on X at @billions_ntwk, and you can find me on X at @provenauthority as well.
Luke: Love it. Well, thank you, Evin.
This has been an enlightening discussion. I really appreciate you making the time, and I’d love to have you come back and see how things are progressing along. And yeah, thanks again for making the time to come on today.
Evin: Well, thank you all for taking time out of your busy schedules to chat agents and the future of identity with me.
I’ll definitely take you up on that kind invitation and otherwise, hope you guys have a great day and I’ll see you out on the internet.
Luke: Alright, take it easy. Thank you very much. Thanks for listening to the Brave Technologist Podcast. To never miss an episode, make sure you hit follow in your podcast app.
If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

