The Human Layer in an AI-First World
Speaker: You’re listening to a new episode of The Brave Technologist, and this one features Joseph Ning, a technology executive, AI governance strategist, and an adjunct assistant professor at NYU. He’s the author of The Hybrid Mind, which argues that institutions must redesign their decision architectures as humans and intelligent machines begin operating together.
A former enterprise AI leader, Joseph now works on frontier applications of AI, including genomic intelligence through initiatives like Gene Genius. His work focuses on building trustworthy AI systems by embedding governance, transparency, and human supervision directly into the architecture of intelligent systems.
In this episode, we discussed traceability in agentic AI and how it changes accountability, how we can build trust into architectural design, how governance is not friction but part of the infrastructure, and why most organizations are failing at building trust by focusing on the model instead of the system,
and ways that human judgment is shifting as we integrate our lives with AI as our [00:01:00] partners. I think this is a really timely conversation. There's a lot of rubber meeting the road as far as how AI is being used in decisioning systems, everywhere from governmental agencies to companies in basically every field.
And I think the approach that Joseph gets into is really timely, and I hope everybody checks it out and checks out his book. Now for this week's episode of The Brave Technologist.
Joseph, welcome to The Brave Technologist. How are you doing today?
Joseph: Doing great.
Speaker: Awesome. Yeah, I know our teams connected at the AI Summit in New York, and so I'm glad we could actually make this happen and get you on the show.
Joseph: Wonderful. I’m happy to be here.
Very excited to have this discussion.
Speaker: I think you're the first guest in a bow tie, too. I love it. It's really awesome. Let's just jump right in. You wrote the book The Hybrid Mind, and I hope everybody checks it out too, by the way. Your book explores how AI is changing the way decisions are made inside institutions and systems.
In your own [00:02:00] words, what is the hybrid mind, and why is the concept important right now?
Joseph: The hybrid mind is the idea that the future of decision making isn’t human or machine. It’s actually both operating together inside the same system. For years, AI has been a tool that produces insight.
Today we are entering a world where AI systems can actually participate in operational decisions. They monitor systems, generate options, and sometimes act, right?
Speaker: Yeah,
Joseph: That changes everything. When intelligence becomes operational, governance can't live in policy documents anymore. It has to live inside the architecture of the system.
The hybrid mind is really about supervised intelligence. Humans define the mission, values, and boundaries. Machines operate at scale inside those guardrails. The challenge now isn't building smarter models. The challenge is building institutions [00:03:00] capable of supervising intelligent systems responsibly.
Speaker: Gosh, this seems timely. I think we could go in a bunch of different directions with this. This is just the absolute right time to delve into it, because we're seeing everything from defense, or the Department of War I guess we're calling it these days, making decisions around autonomous AI use in war fighting, but also on more of the privacy side or the medical side of decision making. And on the genomic side too, with people doing things maybe without having the best understanding of how these systems work or what the downstream effects might be.
What type of thinking goes into these things? Is it about having staff really get up to speed on how the technology works? If you're in an org and you're hearing about this, aside from reading the book, of course, what direction might be good for them to start in?
Joseph: Great question. A healthy partnership between humans and AI is about dividing [00:04:00] responsibilities based on strengths. Machines are extraordinarily good at pattern recognition, scale, and probabilistic reasoning. Humans, on the other hand, are great at context, ethics, and understanding consequences. In the systems we design, AI doesn't operate alone.
We use what we call a multi-model council. Instead of one model making a decision, multiple models analyze the same situation from different perspectives: prediction, reasoning, anomaly detection. Then a supervisory layer evaluates the outcomes. Humans still remain the final authority for high-impact decisions.
So the goal isn't replacing anyone. It's creating a hybrid system where machine intelligence expands human judgment. [00:05:00] So, more in terms of humans staying in control.
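To make the pattern concrete, here is a minimal sketch in Python of a multi-model council feeding a supervisory layer, with high-impact decisions escalated to a human. This is not Gene Genius's actual code; the model names, thresholds, and escalation rule are illustrative assumptions.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Opinion:
    model: str        # which model produced this view
    perspective: str  # e.g. "prediction", "reasoning", "anomaly detection"
    score: float      # the model's confidence that the proposed action is sound

def supervisory_layer(opinions: list[Opinion],
                      high_impact: bool,
                      agreement_threshold: float = 0.8) -> str:
    """Evaluate the council's outputs; humans stay the final authority."""
    avg = mean(o.score for o in opinions)
    spread = max(o.score for o in opinions) - min(o.score for o in opinions)

    if high_impact:
        # High-impact decisions are never auto-approved.
        return "escalate_to_human"
    if avg >= agreement_threshold and spread < 0.2:
        # Strong, consistent agreement: the machine may act inside its guardrails.
        return "approve"
    # Disagreement is itself a signal: hold the decision for review.
    return "hold_for_review"

# Three models examine the same situation from different perspectives.
council = [
    Opinion("predictor", "prediction", 0.91),
    Opinion("reasoner", "reasoning", 0.88),
    Opinion("watchdog", "anomaly detection", 0.86),
]
print(supervisory_layer(council, high_impact=False))  # approve
print(supervisory_layer(council, high_impact=True))   # escalate_to_human
```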
Speaker: That makes a lot of sense. Are there measures for accountability and things like that? Because I feel like that's also an area that's either overlooked or not understood very well.
Joseph: Accountability doesn't disappear when AI acts, right? It actually becomes more structured. The responsibility doesn't sit with the model; it sits with the system and the institution deploying it. That includes the people designing the architecture, the governance policies, and the supervisory mechanisms. In systems like Gene Genius, we design what I call traceable decision pipelines.
Every model output, every recommendation, every intervention can be reconstructed. If you can't trace the decision, you don't control the system. Agentic AI changes the accountability equation because autonomous action amplifies consequences. That's why governance has to be engineered into the loop from [00:06:00] the beginning.
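As a rough illustration of what a traceable decision pipeline could record, here is a minimal Python sketch of an append-only decision log. The stages and fields are assumptions for illustration, not Gene Genius's actual schema; the point is that every output, recommendation, and intervention is written down so the decision can be reconstructed later.

```python
import json
import time
import uuid

class DecisionLog:
    """Append-only trail of every model output, recommendation, and intervention."""

    def __init__(self):
        self.entries = []

    def record(self, stage: str, actor: str, inputs: dict, output: str) -> str:
        """Log one step before it takes effect; return its id for later reference."""
        entry_id = str(uuid.uuid4())
        self.entries.append({
            "id": entry_id,
            "timestamp": time.time(),
            "stage": stage,   # e.g. "model_output", "recommendation", "intervention"
            "actor": actor,   # which model, council, or person acted
            "inputs": inputs,
            "output": output,
        })
        return entry_id

    def reconstruct(self) -> str:
        """Replay the full chain that led to an action, in order."""
        return json.dumps(self.entries, indent=2)

log = DecisionLog()
log.record("model_output", "anomaly_detector", {"signal": "s-42"}, "anomalous")
log.record("recommendation", "council", {"votes": "3/3"}, "flag_for_review")
log.record("intervention", "human_supervisor", {"ticket": "rev-17"}, "paused_pipeline")
print(log.reconstruct())  # if you can't trace the decision, you don't control the system
```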
Speaker: I feel like there's a real desire for greater accountability, but it doesn't seem like the systems beyond the governance of the AI system itself, the broader systems, whether it's tech institutions as a whole or governments or whatever, are really well equipped to handle the question of, okay, now what?
When something goes bad, it can really go bad. What's your take on how those institutions should be thinking about or approaching accountability and trust?
Joseph: I'm glad you brought up the word trust, because trust in AI isn't a brand message; it's a system property, actually. [00:07:00] Building trust into architecture means designing systems that are explainable, traceable, and interruptible from the start. For example, when we built the systems within Gene Genius, we didn't rely on a single model making biological interpretations, right?
We use the multi-model council approach, where multiple models evaluate genomic signals before insights can surface. Then those insights pass through governance checkpoints. That means we can trace how a biological insight emerges, what data influenced it, and which models contributed to the outcome.
Trust happens when the system can explain itself.
Speaker: I see.
Joseph: Yeah, it's a matter of architectural design.
Speaker: Yeah, I mean, that's one of the things that seems really key here, especially on the AI side: having really great people on the architectural side of things to really think about how these models are constructed, versus the age-old move fast and break things [00:08:00] approach. That part will be there, but if you're doing it within a system where the architects know what the heck they're doing, it seems like a better trade-off, or a better running start, I guess I would say.
Where do you think most companies are getting it wrong today on this side of trust? Because if there isn't a great system for holding these companies accountable, the public's trust in these companies, their reputation, and the business outcomes from that seem like the natural place where that de facto standard for accountability is going to come from. But where do you see these companies or organizations getting trust wrong these days?
Joseph: Most organizations focus on the model instead of the system. They ask questions like: how accurate is the model? How big is the dataset? Those are important questions, but the more important ones are: who supervises the system? How do we audit decisions? How do we intervene if something goes wrong?
Speaker: Hmm.
Joseph: A lot of companies treat governance as [00:09:00] documentation.
They write a responsible AI policy and think the job is done, right?
Speaker: Yeah.
Joseph: But governance has to live inside the infrastructure. If governance lives in a PDF somewhere rather than in your architecture, you don't have responsible AI; you have exposure.
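A minimal sketch of that difference in Python, assuming a hypothetical policy table and action names: governance that lives in the infrastructure is a check the pipeline runs at decision time, not a document someone consults after the fact.

```python
# Hypothetical policy table the pipeline consults at runtime,
# not a PDF someone reads after the fact.
POLICY = {
    "publish_insight":  {"requires_human": False, "max_risk": 0.3},
    "change_treatment": {"requires_human": True,  "max_risk": 0.1},
}

def governed(action: str, risk: float, human_approved: bool = False) -> bool:
    """Return True only if the action passes the in-pipeline governance gate."""
    rule = POLICY.get(action)
    if rule is None:
        return False  # no rule, no action: fail closed
    if rule["requires_human"] and not human_approved:
        return False  # high-stakes actions always need a person in the loop
    return risk <= rule["max_risk"]

print(governed("publish_insight", risk=0.1))                         # True
print(governed("change_treatment", risk=0.05))                       # False: needs a human
print(governed("change_treatment", risk=0.05, human_approved=True))  # True
print(governed("delete_dataset", risk=0.0))                          # False: no rule, fail closed
```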
Speaker: That makes sense. Yeah, stuff moves so fast. It all feels like that documentation ends up like the terms of service, or checkboxes, or something like that. You know, there's a common perception that governance slows things down.
From your perspective, is governance currently slowing down innovation?
Joseph: Well, governance only slows innovation when it’s reactive, right?
When governance is built directly into the architecture of a system, it actually accelerates adoption. Institutions adopt technologies they actually trust. In industries like finance, healthcare, and genomics,
trust determines whether technology scales. At Gene Genius we see this very clearly. When [00:10:00] biological insights are produced by AI, scientists want to know how the system arrived at the result. Governance creates that confidence. So governance isn't friction; it's actually infrastructure, and it needs to be the most trusted infrastructure in your organization.
Speaker: We haven't really had anybody on from your side of the space before. So, would you mind unpacking a little bit of what you do at Gene Genius and what your focus is?
Joseph: Yeah. What excites me most about genomics and AI is what we're beginning to do within Gene Genius, which is transforming raw genetic data into what I call biological intelligence.
The human genome contains enormous complexity: millions of interactions between genes and biological systems. AI allows us to detect patterns that humans alone simply cannot see. But because those insights could eventually influence medical research or treatment decisions, governance becomes critical.
That's why we designed [00:11:00] systems where multiple models evaluate genomic signals, and where every insight can be traced and interpreted. The goal isn't just faster discovery; it's trustworthy discovery.
Speaker: Yeah, that's fantastic. I appreciate that, because it's one of those things where you're butting up against health, medical, and academic communities, as well as innovative research, and there are a lot of different regulations and rule sets you all have to think about. And I think it's good color for the fact that architecture is key, not only for having systems where these things can interoperate, but also systems that people can trust.
I'm really curious, too. Is there an effort for this approach to get more widespread beyond genomics and these industries, and get adopted more broadly? Are you talking to people about that who might be working in different [00:12:00] areas?
Joseph: Yeah. The biggest lesson from fields like finance and genomics is that AI must be treated as critical infrastructure.
When systems influence markets or health outcomes, the stakes are very high; that affects a lot of society, a lot of humans. That means AI systems need governance frameworks, resilience testing, and traceability mechanisms.
We can't treat AI like a simple software feature anymore. We have to treat it like the infrastructure that powers institutions.
Speaker: Are there any things that you all are working on that could be impacting everyday users in the near future?
Joseph: You know, everyday users, that's one of the areas that keeps me excited about Gene Genius, right?
We’re building the systems that transform raw genetic data into what we call biological intelligence. The challenge with genomic data is that it’s, like we were saying, incredibly complex.
There are millions of potential interactions inside the genome. AI allows [00:13:00] us to detect patterns that humans cannot see.
Using this multi-model council architecture, we analyze genomic signals, validate patterns, and surface insights that researchers can interpret. Over time, this kind of system can support precision medicine, early disease detection, and personalized treatments. But because these insights could influence health decisions, governance and interpretability are absolutely essential.
Speaker: It seems radically transformative, and often underappreciated, the early detection point, right? If you've got multiple models doing this evaluation, the fact that it can learn off all of this data and spot things, I feel like that's just not getting enough attention. Okay, fine, my portfolio can get managed by an agent, fine. But gosh, imagine if people had the ability to get better detection for things like cancer or other [00:14:00] illnesses. There's so much that's unknown, or not necessarily unknown, but that the broader public is unaware of in the medical field, right? Around progress being made, and around things that were a real black box 20 years ago. So it seems like a really interesting space, especially given the complexity, and it's all in our bodies, right? It's kind of wild, and it is pretty exciting.
On the human judgment piece, do you think human intuition or oversight is going to remain irreplaceable even as these systems become more highly advanced?
Joseph: Thank you for this question. Human judgment remains essential wherever values and trade-offs are involved. Machines are excellent at optimizing objectives, sequencing, and scaling, but humans decide what the objectives should be. Questions like fairness, long-term societal impact, and ethical boundaries still require human oversight.
In systems like Gene Genius, AI detects biological patterns, but scientists interpret [00:15:00] what those patterns actually mean.
So the future isn't human oversight disappearing. It's human judgment moving to higher levels of supervision and interpretation. Machines find patterns; humans decide what they mean. Humans stay in control.
Speaker: Where do you think humans might be the weaker link compared to machine reasoning?
Joseph: Well, humans struggle with large-scale pattern recognition and probabilistic reasoning, right? We're also subject to cognitive biases and inconsistent decision making. Machines are much more consistent at analyzing enormous data sets, but machines lack context, values, and ethics.
That's why the optimal solution is not human or machine intelligence. It's a hybrid intelligence where each complements the other.
Speaker: Yeah, that makes a lot of sense. There are a lot of concerns around AGI and all of these things, but it really seems like [00:16:00] we're seeing some of the shortcomings, where these things fall over when there's not enough context for the machines, or where humans can play a stronger role. And it all seems to focus around getting the right people in the right roles to oversee it.
You don't want somebody who's the captain of a rowboat taking charge of a freighter, right? Where you've got all this liability and all these things that can go wrong, or go right, if things actually work out.
Is there a capability that you think AI will have sooner than most people expect, whether in these areas or more broadly speaking?
Joseph: Well, one capability I think will arrive faster than people expect is AI becoming embedded inside institutional decision pipelines. Much faster, right?
Instead of just producing insights in dashboards, AI will monitor systems, detect anomalies, and trigger operational [00:17:00] responses in areas like healthcare, finance, genomics, and research. This is fundamentally changing how institutions operate. The transformation won't always be visible to the public,
but inside organizations it will reshape decision making. AI won't just analyze decisions; it will participate in them.
Speaker: You know, there's a lot of chatter and discussion around the impact of a lot of research becoming more automated with AI, and concern around people offloading a lot of critical thinking and research, things like that. Are people becoming dumber because they're letting the machines do more of the thinking for them? I'm really curious about your take on that, because in this hybrid approach it seems like a tool you're working with, but these are things that aren't really talked about that deeply with the folks working on this stuff. So I'd love to get your take on that, if you don't mind.
Joseph: Sure. What we call this in terms [00:18:00] of learning is cognitive offloading to AI. This is when we delegate parts of our thinking to machines to reduce mental workload, right? Actually, humans have been doing this all along: writing offloads memory, calculators offload arithmetic, and GPS offloads navigation.
With AI, we're beginning to offload parts of reasoning and analysis. But AI should also function as learning scaffolding: it helps guide people while they develop skills and understanding. Equally important is reflective learning. AI can help us review our reasoning, test ideas, and think more critically about how we reached a conclusion.
So the goal is not to stop thinking. AI should extend human intelligence, scaffold learning, and encourage reflection. This is what I call hybrid learning, or hybrid intelligence.
Speaker: Excellent, I think that's great. This fits well [00:19:00] with the discussion earlier around architecture.
It's almost like there's architecture and composition, right? We become better architects and composers around these things, as opposed to, okay, I don't need to necessarily write out all the long division, I just need to know that I need to apply division and then double-check the answer, right?
I'm really curious, too. There's this race to innovation; is there a conversation that you think we're not having enough of out in the space?
Joseph: Definitely. We spend a lot of time talking about model capabilities and AI safety, but we don't talk enough about institutional readiness. Are organizations actually prepared to supervise intelligent systems? Do they have governance frameworks? Do they have oversight structures? Do they have accountability mechanisms?
Because the first major AI crisis probably won’t be technical. It’ll be institutional. We’re building intelligence faster than we’re building institutions right now.
Speaker: Hmm. Very [00:20:00] interesting. Are there resources folks could check out that you'd recommend for learning more about these organizational approaches?
Joseph: I just published a book about the hybrid mind, the human-AI convergence. So that's a good reference for what is happening in the world. The future of AI isn't about machines replacing humans. It's about humans and machines forming a decision architecture. And that architecture is the hybrid mind.
Speaker: Excellent. Was there anything we didn't cover that you want people to know about?
Joseph: Yeah. AI is one of the most powerful technologies humanity has ever created, but the real test isn't whether we can build intelligent systems. The real test is whether we can build institutions capable of governing them.
Because the future of AI is not decided by algorithms; it will be decided by architecture. [00:21:00] The future of AI is decided by who builds the supervision layer.
Speaker: That's a great note to wrap up on, too. And where can people find you online if they want to reach out or read more of your work,
aside from the book, of course, which we recommend everybody check out too?
Joseph: I regularly post on LinkedIn a newsletter called AI Rhythms. It's a play on the word algorithms, a monthly newsletter that covers various industries and gives a mini talk on AI as well. So there's a lot of information there about what is happening in this space.
Speaker: Excellent. Well, Joseph, I really appreciate the work you're doing, and I appreciate the fact that you came on to talk about it. I think it's giving our audience a lot to chew on as far as thinking about how institutions approach this important topic. And I hope everybody does take the time to go check out [00:22:00] your book, because it seems like a really good tool for the toolkit for navigating all this. We'd love to have you come back and check in with us in the future; the door's always open. Thanks again, Joseph. Really appreciate you making the time today.
Joseph: I loved being here. It was an amazing conversation, and a very timely one too, I agree. This is perfect in terms of how we're moving ahead as a society, as humans, and how we should approach AI: use it more as a tool rather than, like you were saying, cognitively offloading into the AI space. The quality of human thinking needs to be kept and enhanced.
Speaker: Absolutely. Well, thanks again, Joseph, and have a great one, man. Appreciate you making the time.
Joseph: Thank you so much.
Speaker: Thanks for listening to The Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven't already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to [00:23:00] search the web privately.
Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

