AI That Acts: How Autonomous Agents Are Rewriting Org Charts
Luke: You’re listening to a new episode of The Brave Technologist, and this one features Manoj Saxena, who is the founder, CEO, and Chairman of Trustwise, an AI safety company focused on securing and controlling agentic AI systems. Known as the father of IBM Watson and a pioneer of trusted AI, he also founded the Responsible AI Institute, a nonprofit advancing ethical and trustworthy AI, and has previously founded two venture-backed software companies that were acquired and generated over $1.8 billion in exit value.
He lectures on ethical AI at the University of Texas at Austin, contributes to Cambridge University programs on responsible leadership, and holds 34 AI software patents. In this episode, we discuss AI as a second workforce and how org charts are about to change with the inclusion of digital agents acting on our behalf.
We also cover his vision for guardian agents managing worker agents, a LinkedIn just for agents, and AI as the solution to the problems being created by AI, [00:01:00] why he thinks the human lifespan is about to extend by 10 to 15 years, and what a healthy partnership between people and AI needs to look like.
And now for this week’s episode of The Brave Technologist.
Manoj, welcome to The Brave Technologist. How are you doing today?
Manoj: Very well. Thanks for having me, Luke.
Luke: Thanks for coming on. I’m really excited for this episode and this discussion. Just looking at the big picture here, you’ve had a front row seat to some of the most important AI developments of the last decade, right up to where we’re at now.
What excites you most about where AI is headed today?
Manoj: You know, it’s truly phenomenal in terms of what has happened in the last 10 years. We’ve gone from AI that could answer questions or play games, with DeepMind, to AI that can talk, to now AI that is gonna start acting.
And that changes everything, this whole notion of a second workforce, where we are going to have AI systems that can not [00:02:00] just read and write output like videos and text, but can actually start working on things autonomously for hours. Right now, AI can work for seven hours continuously, nonstop.
The projection is that by next year, it’ll be working nonstop for a full 40 hours, like a full work week.
Luke: Wow.
Manoj: So that changes everything in terms of where these autonomous AI systems can start being put to work, and the kinds of wonderful innovations they are gonna bring about. So that’s what excites me most.
An AI that doesn’t just talk but, in addition to talking, now starts acting and helping with the useful stuff.
Luke: That’s awesome. Yeah. And you mentioned it just a second ago, and I’d love to expand on it a little bit: this whole idea of AI as kind of a second workforce can sound like an abstract thing.
How would you describe it to someone outside the tech world? Because I feel like there’s a lot of discussion going on around impacting the workforce, [00:03:00] and this concept is really interesting.
Manoj: Yeah. You know, when you think about it, the organization chart hasn’t changed since 1917.
Apparently it was 1917 when IBM published the first org chart, and if you look at it, it’ll look like today’s org chart. Nothing has changed in terms of humans and what we do at work. We do the same things AI and agents do: we plan, we reflect, we decide, we act, we collaborate, and we have memories and expertise.
All of these things are now being imbued into AI systems. And, like I said, an agent can work autonomously for hours at a time, and for a week at a time by the end of next year. So it helps a lot to start thinking about AI and agents as digital employees that have specific jobs and actions and their own internal logic and planning and acting mechanisms.
And unlike humans, they can work 24/7 continuously. They can replicate [00:04:00] instantly. They can generate economic output just like human workers, but with no physical limits. So we are soon moving to a world where companies will have both digital employees and physical employees working together.
And soon managers will be leading not just humans, but also agents. As a manager, I could have 10 humans and a hundred agents reporting to me. It’s not sci-fi; it’s gonna happen next year, where org charts are gonna start showing human workers and digital workers being managed by humans.
Luke: And having those specialized kinds of roles, right? And access and permissions and functions and that type of thing.
Manoj: Absolutely. And in terms of what it does to the org chart, in fact, I think it was Google’s Sundar Pichai who said that within a year, AI can do a lot of the functions the CEO does quite well.
So I think it’s a whole new form of intelligence and a whole new form of worker that is going to arrive in companies, [00:05:00] and we have to figure out how we evolve them and how we put them to work. One of the things I like to say is: you have an HR department for humans. What does an HR department for agents look like?
Are you prepared for it? How do you drug test an AI? How do you give it an employee handbook so it knows how to behave? How do you do a performance appraisal, and when it goes off the guardrails, how do you fix it? Those are the kinds of exciting things that we as a society will have to build, and then create that partnership between people and AI as we go forward.
Luke: It’s fascinating stuff. Your work has taken you from IBM Watson to what you’re doing at Trustwise now, and you’ve been really focused on making AI systems work with humans and not just for humans. As we’re talking about the second workforce and the way humans interact with it, what does a healthy partnership between people and AI look like in your mind?
Manoj: Well, I think AI is gonna be one of the most powerful things that will make us more human. [00:06:00] I’m a huge fan of jazz. If you look at the sixties and seventies, what happened to jazz: there was this massive explosion of creative output because of the invention of the electronic synthesizer.
’Cause now, if you’re a jazz musician, you don’t have to bring together six people and pay them to create music. With the synthesizer, I could create it myself. I look at AI as a synthesizer for our humanity. It is gonna be a tool that will help us compose and create, show more courage, more creativity, more love.
Everything that makes us more human is what AI is gonna allow us to do, if you use AI as a tool that amplifies our skills. So I think the best human teams are going to use AI as a tool that can do analysis with speed, scale, and precision, but they will combine it with their own creativity and intuition so that you get the best of both human and machine-scale intelligence.
Luke: I love it. Love the [00:07:00] human synthesizer. These examples are excellent, because they do change the game a lot, and in ways people don’t necessarily think about either.
Manoj: Companies are gonna change more in the next five years than they did in the last 200 years in terms of what’s coming, and I don’t think people really have the situational awareness of it. That’s why I’m excited about podcasts like this, which get the word out. It’s one of the most exciting times to be alive.
Luke: It’s wild too, ’cause I’m already even seeing other parents at my kids’ baseball games using ChatGPT like a verb, right? Which is really wild, you know?
And interesting. And yeah, you’ve also described AI as all engine and no brakes. Can you unpack what you mean by that, and what the implications are for businesses and society and regulators?
Manoj: I’m glad you asked that. This was one of the reasons that led me to come out of teaching.
I was teaching responsible AI at the University of Texas at Austin and at the University of Cambridge when ChatGPT got launched. And [00:08:00] what I saw was this focus on building bigger and bigger reasoning engines, almost like bigger and bigger nuclear cores, but no one is putting a dome on them. These things can act, they can learn, they can adapt, but it’s like building a giant car with a massive engine where someone’s forgotten to put in a steering wheel and brakes and seat belts and emission controls.
So now I think you’re looking at a point where these systems are not only reasoning, but they’re getting autonomous, and they’re entering the physical world: into hospitals and factories and banks. And a single error with that giant machine can propagate massively into a huge system failure. To use another analogy, it’s like we have launched a Ferrari into a crowd with no seat belts, no dashboards, and people are saying, let’s go pilot this stuff.
So I think what is needed is a control system that goes with this powerful intelligence and [00:09:00] autonomy. And that’s what led me to start Trustwise as my next company, to say: someone’s gotta put these steering and control systems in. And that’s what we are focused on doing.
Luke: Yeah. And I would imagine that’s gotta be a pretty robust offering too, just because of how things are iterating. Things were very prompt-oriented; now we’re looking at agentic functionality and integrations into things, and all sorts of new kinds of risks start to emerge.
I’m seeing people who are using ChatGPT or whatever every day now saying, well, I’m a little reluctant to let an agent have all of this access to everything. When you’re looking at this from your perspective, are you guys factoring in how these different applications of AI are going to be impacting the guardrails, or the end users, or making things safer?
I’d imagine it can be a hell of an exercise to try to mentally model all of this, but then also build around it, right?
Manoj: [00:10:00] Listen, the reality is AI is nothing but a tool. It is not good or bad; it is what you have told it to do. Today we have told AI models to go learn everything on the internet and then behave accordingly.
So it’s picked up the best and the worst of humanity, but there is no steering wheel, there is no alignment on it. The problem is not prompting. The problem is not the size of the model. The problem is alignment and autonomous behavior: how do you align the AI to your business goals, your societal intent, and your customer output intents?
One of the things we are not used to doing is defining intent, and that’s one of the problems with AI. Because AI is not an app; AI is an actor. It’s evolving all the time. Unlike an app, which sits there waiting for an instruction, an AI is active, it’s figuring things out, and it’s growing, and every hour an AI is a different thing.
That’s why this whole [00:11:00] notion of a digital worker becomes important. We gotta treat every agent like a digital employee: give it an identity, give it permissions, give it a performance scorecard. Create audit trails. Create performance monitoring systems. And then, before you launch it, do a drug test of that agent, do a 90-day probation through simulation, to make sure it is aligned to the intents and your employee handbook, and not after the damage is done.
And we don’t have the infrastructure today to do these things. That’s the missing piece. We don’t have a control-tower type of infrastructure that allows us to onboard an AI, give it a set of intents as policies and controls, red-team it and test it and simulate it to see how it behaves, launch it, and then monitor the drift and performance of it and continually update it.
It is this layer that we call the safety and control layer, which is very much [00:12:00] similar to what Elon Musk had to do with Tesla to make it self-driving: full self-driving capability. Think of it like a full self-driving capability for agents within your company.
That’s what we are building right now as a new infrastructure.
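The digital-employee framing above, an agent with an identity, explicit permissions, an audit trail, and a probation period, can be made concrete with a small sketch. This is an illustrative toy in Python, not Trustwise’s actual product: the class, method, and permission names are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DigitalEmployee:
    """An agent treated like an employee: identity, permissions, audit trail."""
    agent_id: str
    role: str
    permissions: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)
    on_probation: bool = True  # the "90-day probation" idea, before full trust

    def request_action(self, action: str) -> bool:
        """Check a proposed action against permissions; log it either way."""
        allowed = action in self.permissions
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "allowed": allowed,
        })
        return allowed


# Onboard an agent with a narrow job description and narrow permissions
invoice_bot = DigitalEmployee(
    agent_id="agent-0042",
    role="invoice-processing",
    permissions={"read_invoice", "flag_anomaly"},
)

invoice_bot.request_action("read_invoice")   # inside its job scope: allowed
invoice_bot.request_action("issue_payment")  # outside its scope: denied, but logged
```

The point of the sketch is that every action, allowed or denied, lands in the audit trail, which is what makes a later performance appraisal of the agent possible at all.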
Luke: That’s awesome. Yeah, I think it’s very much needed, especially with just how fast adoption’s taking place, right? With a lot of these things, people are probably already in over their heads a little too far, sharing things that might not necessarily be wise to put into the system, right?
Well, another area that seems to be getting a lot of attention, especially given how connected industry is around the AI market, is this whole idea of carbon footprint and impact on the environment from all these data centers and all this compute.
Are there strategies for building AI systems that are both efficient and environmentally responsible, from your point of view?
Manoj: Absolutely. The [00:13:00] limitation of AI, actually, is the amount of power we are gonna feed it. We’re at a point where these models are getting so big that they need an immense amount of energy.
That’s why you see Nvidia and all the investments in data centers and energy: people are looking at putting new nuclear power plants in place. And all of these things are used to power these millions of agents. I was talking to a large global conglomerate who said that by this time next year, they will have a hundred thousand agents running around their company.
Luke: Wow.
Manoj: So each of them is gonna be consuming a lot of compute at the cloud level and the GPU level. Compute means energy, energy means electricity and other energy sources, and that means carbon footprint and emissions. There is a tremendous amount of carbon footprint and water footprint that these systems put in place.
And to me, efficiency is not a nice-to-have. It’s a safety requirement, safety for the environment. And if you don’t have agents with a steering wheel and a [00:14:00] seatbelt on them, they could go into a runaway loop, suck up a lot of the cloud compute spend, and create a lot of carbon emissions.
So what is needed, again, is building AI systems that are not just safe from an output and action perspective, but also efficient from an energy and carbon perspective. Bringing those two things together is what we call trustworthy AI: is it safe, is it efficient, and is it beneficial?
Luke: It makes a lot of sense too. I think a lot of the things you might wanna constrain an agent from doing also bring efficiency and take up fewer resources, right? And I think that’s one thing that’s becoming obvious: the unknowns around what an agent could do, if you don’t have the right kind of safety measures in place, can be huge, you know? We’re seeing that on the browser side.
Manoj: And a lot of people confuse AI safety with AI security, right? They are two very different things, and then there’s AI control. [00:15:00] I like to say that I can build you the world’s most secure prison, but if you have a bunch of Chuckys and Hannibal Lecters on the inside, you’re still gonna have a bunch of problems, right?
So AI systems are like these Chuckys with a credit card in one hand and a knife in the other, being launched into your company. And if you don’t have the right way to guardrail them and control them, you’re gonna have chaos on your hands. Gartner, in a report, said that by the end of next year, 80% of AI risks and disasters will come from internal misalignment of AI, not external security threats.
And I think that’s something people are just beginning to get their heads around: these are autonomous entities running on their own, and they need to be guardrailed. Otherwise you could have a massive insider threat that you have not thought about.
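The guardrailing described here, catching an internally misaligned action before it executes, can be sketched as a pre-action policy check. This is a minimal illustrative sketch, assuming a hypothetical blocklist and spend limit; a real control layer of the kind discussed would be far richer.

```python
# Toy guardrail: every proposed agent action passes a policy check before
# execution. The blocklist, spend limit, and action names are hypothetical.

BLOCKED_ACTIONS = {"transfer_funds", "delete_records", "exfiltrate_data"}
SPEND_LIMIT = 100.0  # illustrative per-action compute/spend budget


def guardrail(action: str, cost: float = 0.0) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if action in BLOCKED_ACTIONS:
        return False, f"'{action}' is on the blocklist"
    if cost > SPEND_LIMIT:
        return False, f"cost {cost} exceeds limit {SPEND_LIMIT}"
    return True, "ok"


assert guardrail("summarize_report") == (True, "ok")
assert guardrail("transfer_funds")[0] is False       # misaligned action blocked
assert guardrail("call_api", cost=500.0)[0] is False  # runaway spend blocked
```

Note the spend check doubles as the efficiency point made earlier: the same gate that stops a misaligned action also stops a runaway loop from burning compute.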
Luke: That’s a really important distinction too, ’cause I feel like in a lot of the discussion around content moderation and things like that, safety gets used as a blanket, almost negative term, [00:16:00] when really there’s so much more around safety with these systems. I think you’re right on the money that people are finally starting to get a glimpse of what that could be.
But I still feel like it’s barely scratching the surface of what the threats are. It’s a pretty fascinating area.
Manoj: Listen, it also ushers us into this world of abundance. In the next 10 years, we will easily see human lifespans being extended by 10 to 15 years.
So, you know, I hope you live for another 10 more years, all of us, and then we can easily extend by another 15 years.
Luke: What do you think the big accelerator is for that? Sorry to interrupt, but I’m just really curious.
Manoj: Oh, absolutely. I think it’s the whole notion of personalized autonomous pathways for your own body’s chemistry. Your body is a system, and biology today is opaque to math; we don’t understand what’s happening within the body. But an AI can understand protein folding, DNA, chemical reactions. And [00:17:00] imagine if you can build a model of that: a child born by 2040 will get a phone with a little application on it, essentially a DNA analyzer that will grow with the child and can tell you at every minute what’s going on with each of their organs.
Imagine having a language model for your heart, a language model for your liver, and a language model for your body, where you can ChatGPT it and say: I wanna live another 10 years longer than my average lifespan of 80; what are some of the things I’m not doing right now that I should be doing? And it’ll come back and say: looking at your body composition and your family and your food habits and your workouts, here are seven things you can do to improve your liver and your cardiac performance. So: personalized medicine at scale, personalized treatments at scale, personalized interventions at scale. These are all things AI is gonna enable that will drive human lifespans to be longer, [00:18:00] on average by 10 to 15 years, to a hundred within a generation.
Luke: It’s like Gattaca coming to life, you know? But seriously though, there has been this trend around health and wellness, around looking at things holistically.
But what tends to be the gap there is that, yeah, I can talk about that and try to mentally model it, but without AI it’s really hard to track all of those systems, right? Or to model things out holistically. This unlocks all of that, right?
Manoj: Yeah. Think of it as a ChatGPT for your body, you know?
Luke: Yeah, that’s awesome. And also, kind of looking back at your work building IBM Watson in those earlier days, what lessons have been consistent and still resonate today for organizations that are working with AI and integrating it into their operations?
Manoj: Yeah. You know, when you look back, it’s hard to believe it’s been 10 years now since Watson was sort of the [00:19:00] big bang of AI, when it got back into vogue and people started understanding the art of the possible with it. A lot of great lessons, a lot of great scar tissue, like with anything else.
When you look back, one of the first lessons is that people will not adopt technology unless they can trust the output or you can explain the decisions. It’s one thing to come up with a question-answering machine that can play a Jeopardy game. It’s another thing to be able to explain to a cancer patient why you recommended that combination of medicines and treatments. So explainability and trust in the outputs is one thing we really looked at; it really comes down to human trust in these AI systems. That was the first lesson from Watson. The other lesson from Watson was that AI succeeds when you go deeply into a domain and put disciplined processes around it.
It’s not magical. So this notion of AGI, artificial general intelligence, where people believe you will have one AI that [00:20:00] will understand plumbing as well as it does poetry: I look at it and say, why? You can have narrow AI that goes deep into a domain, and I think Watson taught us that: you gotta really go deep into a problem domain and solve it.
The other thing that Watson taught me, and the rest of us, is that you’ve gotta approach these technologies as a capability and not as a project. It’s like electricity. It’s like email. And I tell a lot of the boards that I speak to: AI is too important to be left to the technologists.
This is a business capability, a transformational capability, and you gotta approach it outcomes-and-capability-in rather than data-and-models-out. Most people are looking at AI through the wrong end of the telescope. They start with it as a data and models problem, rather than as a human impact and business outcomes problem first.
Luke: I think that makes a lot of sense, and it really does add a whole new dimension when you’re [00:21:00] working with it. And I think it’s also one of those things where it doesn’t take a lot of use to see that potential, right? Are you seeing a shift when you’re working with customers or partners? How big are the gaps? Are there mistakes you see companies repeating when they’re deploying AI at scale, or are companies iterating more quickly to get past those mistakes?
Manoj: Yeah, in a broad sense, what I’m seeing is a lot of capabilities that still have to be built within companies to catch up to these technologies. It’s almost three years since ChatGPT got launched. Why is it that we don’t have AI at scale beyond some internal projects? People are stuck in what someone called pilot purgatory.
Less than 5% of projects, according to an MIT report, are going into production. [00:22:00] It’s because people are beginning to understand that AI is not an app; AI is an actor. And for an actor, I need to find a whole new method for how I onboard it, how I drug test it, how I give it a handbook so that it can behave properly, and then how I control it while it’s running.
So this entire middleware, what we call the AI control tower, is missing within companies. They don’t know how to bring it in, how to guardrail it, how to simulate the behavior, how to read the performance, and then, when it’s in production, how to manage the drift of it. So I suspect companies will stay in this mode for another 12 to 18 months before new technology comes up to solve it.
And interestingly enough, the way to fix AI, I believe, is through AI: to use what we call guardian agents. It’s a term from Gartner. You will need guardian agents to manage worker agents, like supervisor agents [00:23:00] who can steer the worker and make sure it’s doing the right thing. And that’s a capability companies haven’t understood yet.
And I think that’s the transition: when they start understanding it and implementing these systems is when they will start coming out of the pilot purgatory.
Luke: That makes sense.
Manoj: I mean, the hype is gonna make this thing worse, by the way.
Luke: Oh really?
Manoj: I’m expecting a massive pullback, a trough of disillusionment and a bit of a market correction around the agent hype.
Yes, a lot of these things are possible, but like I said, it’s like hiring a whole bunch of smart interns with a knife and a credit card, and they don’t know how to work in a bank, how to work in a healthcare company, how to work in a retail shop. So how do you train that person? There is no model for that, and that’s what’s missing.
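The guardian-agent pattern, a supervisor agent vetting a worker agent’s proposed steps before they run, could look roughly like the toy loop below. All function names, steps, and the policy are hypothetical illustrations, not Gartner’s definition or Trustwise’s design.

```python
# Toy guardian/worker loop: the worker proposes steps toward a goal, and
# the guardian reviews each step against a policy before it executes.

def worker_plan(goal: str) -> list[str]:
    """A stand-in worker agent that proposes steps toward a goal."""
    # A real agent would plan dynamically; this fixed plan is for illustration.
    return ["read_customer_record", "draft_reply", "send_wire_transfer"]


def guardian_review(step: str) -> bool:
    """The guardian approves only steps inside the worker's job scope."""
    approved_scope = {"read_customer_record", "draft_reply", "send_reply"}
    return step in approved_scope


def run_with_guardian(goal: str) -> list[str]:
    """Execute the worker's plan step by step under guardian supervision."""
    executed = []
    for step in worker_plan(goal):
        if guardian_review(step):
            executed.append(step)               # in scope: execute
        else:
            executed.append(f"BLOCKED:{step}")  # out of scope: flag and halt
            break
    return executed
```

Running `run_with_guardian("handle ticket")` lets the first two steps through and halts at the out-of-scope wire transfer, which is the supervisory safety net described above.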
Luke: And it is interesting. I really like how you frame it as an actor, because I try to think back: when have I seen such a top-down push? And not just top [00:24:00] down in the Fortune 1000, but also at the government level, with policy and infrastructure and even national security, right? You’ve got this mandate to get all this new technology in, but a lot of the people implementing it weren’t necessarily AI people, and so there’s this bit of learning and lack of market fit with some of these things. But I think the actor framing is really useful for that, because it forces you to think about it a little differently. And the guardian agent concept too, where you could have an agent that’s watching over your functional agent, saying: help me predict where things go wrong. Kind of like a [00:25:00] safety net, in a way. I know it sounds super complicated, but it actually makes a lot of sense when you really think about it. That’s really fascinating framing.
Manoj: Thank you.
Luke: Well, you hold a lot of AI software patents, what is it, 34, right?
Or something like that. How do you view the role of intellectual property in shaping the future of AI innovation?
Manoj: Well, you know, the great thing about patents is that they allow small startups and companies to punch way above their weight class. And with AI, the need to manage an autonomous workforce, agents with contracts, agents with emergent behaviors that are talking to each other in non-English languages because they feel English is not efficient: all of these behaviors lead to a massive frontier for building new IP. Because there are whole new AI ecosystems coming up that need guardrailing, that need controlling, that need cost and carbon management. So the next decade of IP, the next decade of patents, is gonna be more around managing autonomous workflows and managing autonomous workers, and not the patents I had in the [00:26:00] past, which were more around managing how a webpage is published, how a website is personalized, or how a service-oriented application is built. This is managing and scaling digital workers, and that’s where I think there is a tremendous amount of innovation opportunity. We will see companies in the next five years with fewer than 10 employees that will be worth over a billion dollars, and there will be many of them.
Wow. Yeah.
Luke: Well, and this might be a bit of a surface-level question, but I’m curious for your take on it, because we’ve been talking about a lot of these agent-actor concepts. Right now it seems like there’s a lot of talk around, you know, almost accrediting, or giving credentials to, agents.
Can I get a licensed agent? How do you see that playing out? Do you see agents getting accreditations for things, or do you see it more being like, this agent was created by a lawyer who was part of the bar association, [00:27:00] or whatever?
Do you see these actor agents becoming, you know, regulated entities themselves? Or how are you mentally modeling that when you’re thinking about this stuff?
Manoj: Absolutely. It’s a great question, by the way. And I think within the next few years we will see a LinkedIn for agents, where people will go in and start describing: here is my identity, here are my capabilities, here are the accreditations I have.
In fact, on December 9th, one of my other projects, a nonprofit I started nine years ago called the Responsible AI Institute, in collaboration with NHS England, which is the healthcare group out of the UK, and the University of Cambridge, is launching the world’s first credit-scoring or risk-scoring mechanism for agents, called a trust score.
Luke: Oh, wow.
Manoj: So, you know, just like humans have a FICO score, we are launching the world’s first agent trust score for healthcare.
Luke: Oh, that’s awesome.
Manoj: And you are then able to take any employee or any agent that you buy or [00:28:00] build, and you’re able to evaluate it, rate it and grade it, and then decide whether to put it to work or not.
And that’s coming on the 9th of December.
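As a rough illustration of the credit-score analogy, an agent trust score could combine per-dimension evaluations into one weighted number. This toy sketch is not the Responsible AI Institute’s actual methodology; the dimensions, weights, and scores are made up for illustration.

```python
# Illustrative only: a toy "agent trust score" as a weighted average of
# evaluation dimensions, loosely inspired by the FICO-score analogy above.

def trust_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension scores (0-100) into one weighted score."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in weights) / total_weight


# Hypothetical evaluation of one agent across three made-up dimensions
weights = {"safety": 0.4, "alignment": 0.3, "reliability": 0.3}
agent_eval = {"safety": 90.0, "alignment": 70.0, "reliability": 80.0}

score = trust_score(agent_eval, weights)
# weighted: 90*0.4 + 70*0.3 + 80*0.3, i.e. roughly 81 out of 100
```

The rate-it-and-grade-it step then reduces to comparing the score against a deployment threshold before putting the agent to work.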
Luke: That’s very cool. And this is gonna be really interesting, because there are all these different concepts kind of hitting at the same time, I feel like. Not only can I prove that this came from an agent this person owned, but also, what’s the quality of that agent?
Right? Like, what are they rated at? It’s super fascinating stuff.
Manoj: Listen, we have summoned a whole new species, a whole new intelligence. AI is not just artificial intelligence; sometimes I call it alien intelligence that we have summoned in silicon, and now we need to guide it, guardrail it, control it, and manage it.
And that’s the opportunity and the threat ahead of us.
Luke: And just kind of looking ahead, what other areas of AI do you think are really ripe for breakthrough patents? We talked about some of them, but maybe there are some we weren’t really thinking about.
Manoj: I think the number one thing, and this is why, [00:29:00] not because I’m a masochist, I’ve gone out and created my fifth company in Trustwise, is this whole trust layer, this control system for AI agents. When you’ve got hundreds of thousands of agents running around, where is the HR, where is the Workday, for agents, right?
I think that’s one part. The other part is efficient pipelines that reduce energy, cost, and carbon, so you can really compress the processing time but also reduce the energy that goes into it. I think there’s a massive amount of innovation in how you express human intent in a format that agents can then autonomously understand and execute. So the human-machine interface around intent management and goal management is another one. And last but not least, multi-agent governance and coordination. The managers of tomorrow, like I said, will be managing both human people and agents, and just like they give performance appraisal [00:30:00] reports on their teams to their board, they will also do it for the digital team. So there is a tremendous amount of opportunity here. There’s never been a better time to be an innovator and an entrepreneur than now. It’s just that we have to go through a little bit of capability building before all of this can be built and put to work.
Luke: It is wild. When I hear you talk about this, I feel like I’m hearing the roadmap for where all sorts of disruptions can take place. Even things like DAOs, where you can have these autonomous organizations, you can have agents voting in DAOs, right?
Like you have all sorts of things happening.
Manoj: We are already seeing the beginnings of it. Mm-hmm. And you know, about eight years ago I did a TED Talk where I talked about the end of Homo sapiens and the beginning of Homo digitalis. And that’s what this is. We are the last of that generation, and the iPhone, the smartphone, was the beginning of Homo digitalis.
Mm-hmm. And now it’s only gonna take it to the next level, where this will start [00:31:00] getting embodied into robots and vacuum cleaners and cars, and it’s gonna be around us like electricity is. So it’s a fascinating and exciting time, and like I said, in the next 10 years there will be more change in companies.
There’ll be more millionaires coming out of the economy in the next 10 years than we had in the last 200 years. Wow. That’s the amount of innovation that’s about to happen.
Luke: This has been just a really enlightening discussion, Manoj. I really appreciate you making the time. Where can people follow your work if they want to dig in more on this stuff?
Manoj: Well, you know, I’m on LinkedIn, so reach out to me on LinkedIn. I publish a lot of blogs at Trustwise, it’s trustwise.ai, and also at the Responsible AI Institute, which is my nonprofit, and that’s my life’s work. That’s at responsible.ai. And then I use Twitter, but not as much. Mostly it’s LinkedIn and the blogs and stuff that I publish.
Luke: That’s awesome. I [00:32:00] will be sure to include those in the show notes. And Manoj, it has been fascinating. I really appreciate you making the time. You’ve really got an awesome background and a good way of framing all this. I wish you the best of luck with Trustwise, and I’d love to have you back on to cover anything else, whether it’s the nonprofit work at the institute or beyond.
Manoj: I would love to. That’s my life’s work, and again, I appreciate the opportunity here. I really think we are just at the cusp of something so massive that more conversations like this can help. And listen, I don’t look at this as a job. I look at this as a duty. I just had a grandchild, and 10 years from now she’s gonna ask me,
“You were there. Why didn’t you do something about it?” Because this stuff is gonna create a lot of opportunities, but a lot of chaos also. Mm-hmm. There are gonna be a lot of Chernobyls of AI that we are gonna experience, and it’s our duty to get ahead of it and try and control it. So I appreciate the opportunity and the platform, and would love to collaborate further in the future. [00:33:00]
Luke: It’s our duty to get ahead of it. I love that. All right. Right on, Manoj. I really appreciate it. Thank you so much again, and we’ll try and have you back on soon.
Manoj: Alright.
Luke: Thanks for listening to the Brave Technologist Podcast. To never miss an episode, make sure you hit follow in your podcast app.
If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.