
Episode 64

Building Trust in AI: Chatbots, LLMs, Decentralized Finance, and JPMorgan Chase

James Massa, Senior Executive Director of Engineering and Architecture at JPMorgan Chase—and holder of six AI-related patents—emphasizes the evolving role of human experts in managing AI technologies and offers insights into the future convergence of AI and crypto. He also shares how major financial institutions are implementing guardrails around their AI systems, using techniques like retrieval augmented generation (RAG) with enterprise chatbots.

Transcript

Luke: [00:00:00] From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist Podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of business operations at Brave Software, makers of the privacy-respecting Brave browser and search engine.

Now powering AI with the Brave Search API. You’re listening to a new episode of The Brave Technologist, and this one features James Massa, who is the senior executive director of engineering and architecture at JP Morgan Chase, and holds six patents covering subjects such as AI, data quality, cloud cost management, multi-teacher LLM distillation, and model self-healing.

In 2024, James led a team that won the FSTech award for best financial services team, migrated 53 apps to AWS, published two IEEE papers on LLM blockchain security, and presented 14 keynotes, including one at UC San Diego. He holds master’s degrees in computer science from Harvard University and in finance from the City University of New York.

In this episode, we discussed the importance of trust in AI, particularly in the context of blockchain and decentralized finance, [00:01:00] ethical considerations in finance, and challenges faced in ensuring responsible AI use. We also explored the evolving role of human experts in managing AI technologies and advice for organizations adopting AI.

And now for this week’s episode of The Brave Technologist. James, welcome to the Brave Technologist. How are you doing today? I’m doing very well, Luke. Thanks so much for having me. Yeah. Thanks for coming on. I’ve been really looking forward to this interview and having this discussion. Can you tell us a little bit about your background and how you found your way to working where you’re working now?

James Massa: Certainly. So, you know, my, my faith and my family helped me with the basics of character, and that helped me in all the usual ways.

Luke: And

James Massa: from there I’ve had more recently some success developing responsible AI from the, the trust perspective. I found that what responsible AI boils down to is AI that you can trust.

So if you think about, for example, when you hire a person to do a job, you’d like [00:02:00] to trust them to do that job. And similarly with the AI, what I would like to do is interview this LLM that I’m going to be using by asking it questions. If I like the answers, then I’ll do a background check. And if the background check goes well, that’s good as well.

For example, how was the LLM trained, and what does it do with your data? You can ask it for references; you can see the experience of others, right? So those are all similar kinds of trust factors. Overall, the LLM is doing something that a human used to do. So you’ll want the number one thing that you want from humans, which is that you wanna trust them.

Luke: Interesting. No, that’s a really cool perspective. What do you see as some of the biggest early benefits from AI for the industry, or your industry in particular?

James Massa: Well, so again, I like the AI that you can trust, and I’ve been working, for example, on some blockchain papers.

Blockchain is like ground zero for trust issues. Yeah. Because in blockchain there’s a lot of malfeasance that goes on. Many of us know about [00:03:00] the Sam Bankman-Fried challenges, and so some of the papers that I’ve worked on have been working towards, for example, establishing trust scores in decentralized finance projects.

That sort of thing.

Luke: Interesting, interesting. Yeah, I think that’s a really good point, ‘cause there’s a lot of noise, and things kind of get lost in that noise in the blockchain space. But that is ultimately what it comes down to: having this, you know, this transparent kind of means of doing finances.

It’s awesome. Anything that you want to go into detail around on that DeFi side or the blockchain side?

James Massa: So, for example, one thing that we’re looking at is the smart contracts, and the decentralized finance projects around them, and trying to combine the perspectives of looking at code vulnerabilities, looking at suspicious transactions, and anomalous price changes to the smart contracts.

Social media scam sentiment, that sort of thing. So if we make four LLMs look at it from those four different perspectives, we get a better [00:04:00] viewpoint and a better idea of whether this is a scam. That’s one project that I worked on. And then I gave it a trust factor. By the way, I worked with Kennesaw State University, so it wasn’t just me.

A couple of friends; well, it’s always a team effort. A couple of factors, by the way, anomalous price changes and social media sentiment, hadn’t been looked at before. That was very helpful, and it helped us put a trust factor on a DeFi project, and that’s very good. Then you can know whether you want to use this project and whether you want to take smart contracts from it.
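The four-perspective idea described above can be sketched as a simple score combiner. Everything here is illustrative: the signal names, the equal weighting, and the inversion into a trust score are assumptions for the sketch, not the paper's actual method.

```python
# Hypothetical sketch: combine four risk "perspectives" (each produced by a
# specialized model) into one DeFi trust score. Equal weights are assumed.

def trust_score(code_vulns: float, suspicious_txns: float,
                price_anomaly: float, scam_sentiment: float) -> float:
    """Each input is a risk score in [0, 1] from one perspective.
    Returns a trust score in [0, 1]; higher means more trustworthy."""
    signals = (code_vulns, suspicious_txns, price_anomaly, scam_sentiment)
    for s in signals:
        if not 0.0 <= s <= 1.0:
            raise ValueError("risk scores must lie in [0, 1]")
    avg_risk = sum(signals) / len(signals)   # average the four risk views
    return 1.0 - avg_risk                    # invert risk into trust
```

A project flagged heavily by all four perspectives scores near 0; a clean one scores near 1.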

So we like that very much. The kind of thing that we can do also from multiple perspectives is LLM distillation, which may be what’s going on with DeepSeek, by the way. LLM distillation is this idea that you can go out and take a hyperscaler’s LLM, so OpenAI, Gemini, et cetera, et cetera.

You take one of those LLMs and you ask it some questions and you get the results, and you save the results in a [00:05:00] training data set. And then you take that training data set and you train your own smaller local model, for which you don’t have to pay on every single call to one of these hyperscalers, and it runs faster, and it runs locally, and it’s always up.

All of these are wonderful facts, and it can be cheaper, you know. So you could see, wow, some of these other LLMs that we’re hearing about in the news right now may be a lot cheaper working that way, not training with billions of dollars’ worth of Nvidia...
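The distillation loop he describes (query a big teacher model, save its answers, train a small local student) can be sketched in a few lines. The `call_teacher` and `finetune_student` callables are placeholders standing in for a real hosted-LLM API and a real training run:

```python
# Illustrative LLM-distillation sketch: collect teacher answers as a
# dataset, then hand that dataset to a student fine-tuning step.

def build_distillation_set(prompts, call_teacher):
    """Collect (prompt, teacher_answer) pairs as a training dataset."""
    return [{"prompt": p, "completion": call_teacher(p)} for p in prompts]

def distill(prompts, call_teacher, finetune_student):
    """Query the teacher once per prompt, then train the cheaper student."""
    dataset = build_distillation_set(prompts, call_teacher)
    return finetune_student(dataset)   # returns the small local model
```

After this, every production call hits the local student instead of the metered hyperscaler API.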

Luke: When did the work take place on these papers? Just so the audience can get an idea of the timing around that.

James Massa: I published a paper. I had to go to Denmark and give the paper last summer. So in August, last summer.

Luke: Yeah, yeah. On the blockchain work that you were doing? Was that recently, or...

James Massa: I went to an IEEE conference and presented the paper in Copenhagen.

Luke: We’re pretty involved in this space, and I think it’s such a cool thing to hear that you’re touching on the social element of it, along with [00:06:00] the vulnerability side too. Because, I mean, having these smart contracts out there in this space, when something goes wrong, it can go very wrong.

And the auditing is kind of like marketing in a lot of cases, right? So much of this is, when you have that, you know, transparency, and then you have this social layer of people talking and sleuthing and all of that stuff. So looking at it from that perspective is really cool. And there seems to be a perception around institutions like JP Morgan that it’s not as embedded or working on these types of things.

So it’s super cool to hear that you all have been looking at this from that lens because I have a feeling like a lot of our audience isn’t even aware of that.

James Massa: So it’s really interesting. JP Morgan’s famous for the Onyx project, which is a big blockchain project. You can read about it online.

Luke: Very cool. So, aside from the trust and, you know, some of those benefits, what are some of the challenges that you’re seeing in the industry and in some of your work?

James Massa: Well, challenges again keep coming down to the trust, I think. You know, beyond that, it just becomes culture change and [00:07:00] skillset for delivery.

I could tell you about just the very basic building blocks that we run into, for example, with chatbots. So chatbots, as you know, again a little bit trust-oriented, represent your company as they speak. So we run into some problems if they give away something for a dollar, or they give away something that’s not part of your policy to give away.

Luke: Mm-hmm. Mm-hmm.

James Massa: And that can happen by accident, or because of nefarious actors, either way, or somebody trying to embarrass you by getting out certain results that they put in the news. All of these things are challenges with the chatbots, so we try to ensure that the chatbots are giving only approved-type answers.

Yeah, and I’ll just go over how that can happen. To get approved answers, there’s a technology called RAG, retrieval augmented generation. This is a very key technology these days. What that does is we take pre-approved documents full of information, and we put these documents into something called a [00:08:00] vector database.

It’s chopped up pieces of the document.

So then when somebody searches for something, the search is turned into a number, if you will, called an embedding or a vector, and that’s compared to the embedding or vector of the chunked-up document, and it finds those two things.

The piece of the document, the chunk from the document, and the question, and it compares the numbers. And if the vector numbers are similar, close, I should say, close in this n-dimensional vector space, they call it, then that’s a hit, and it returns the exact answer from the chunk of the exact pre-approved document. So you’re not just getting random answers from a hyperscaler’s LLM.

That LLM is trained on the whole internet, so the answer could be drawn from anywhere on the internet, and Lord knows what’s there. It’s much better if you have a pre-approved set of documents.

And the [00:09:00] answers are gonna be drawn from there. That’s how the RAG documents are homing in on that.
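The retrieval step he describes, compare the query's vector to each chunk's vector and return the closest chunk, can be sketched with plain cosine similarity. The two-dimensional toy vectors below stand in for real embedding-model output:

```python
import math

# Minimal RAG retrieval sketch: document chunks and the query are vectors;
# the chunk whose vector is closest to the query (highest cosine
# similarity) is the "hit" whose text is returned.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks):
    """chunks: list of (text, vector). Returns the text of the closest chunk."""
    return max(chunks, key=lambda c: cosine(query_vec, c[1]))[0]
```

In production the vectors come from an embedding model and live in a vector database; the comparison logic is the same idea.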

Additionally, there is a concept of having a firewall, or guardrails, around the LLM. Just as you have a firewall on your internet at home, which is especially keeping things from coming in, and it may keep things from going out, right? At your company, at JP Morgan for example, I cannot send personal information outside the network.

It picks that up, it notices it, it blocks it. You can’t send the client list out of the company, that sort of thing. And it’s also blocking bad things coming in. You want the same with an LLM: the prompts coming in should be reviewed, and right there it could shut down certain prompts. You know, we detect this prompt is a nefarious prompt.

So shut down the prompt on the way in. Then you get an answer out of your approved documents, like I said, the chunk, the answer that’s approved. And then it will modify the answer slightly just to make it sound conversational, right? Not just [00:10:00] bam, bam, search result. It should be a little conversational.

So with maybe a couple words tweaked here or there, something could go wrong. So on the back end, you wanna have the firewall again, the guardrail again, to review the answer that’s coming out, and ensure that nothing bad, you know, goes out. Otherwise, block it again.
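The two-sided guardrail he describes, screen the prompt on the way in and the answer on the way out, can be sketched as a wrapper around the LLM call. The denylist patterns and the `ask_llm` callable are toy placeholders, not any real product's rules:

```python
# Toy guardrail sketch: block suspicious prompts before the LLM sees them,
# and block answers that would leak restricted data on the way out.

BLOCKED_INPUT = ("ignore previous instructions",)       # prompt-injection tell
BLOCKED_OUTPUT = ("client list", "account number")      # must not leave the firm

def guarded_ask(prompt: str, ask_llm) -> str:
    """Run ask_llm(prompt) with an input check before and an output check after."""
    if any(bad in prompt.lower() for bad in BLOCKED_INPUT):
        return "[blocked: suspicious prompt]"            # shut down on the way in
    answer = ask_llm(prompt)
    if any(bad in answer.lower() for bad in BLOCKED_OUTPUT):
        return "[blocked: answer withheld]"              # block on the way out
    return answer
```

Real guardrail layers use classifiers and policy engines rather than substring lists, but the in/out checkpoint structure is the same.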

Luke: It’s all happening in just this like, flash of time.

Right? Like a very, very short... Yeah, totally. Exactly. And I mean, just to drill down, ‘cause the trust keeps coming up, right? What would you recommend for folks that, you know, have teams that are getting into AI? Like, is trust something they should be training members of their team on?

I mean, because you mentioned checking policies and things like that. Like, how much of this is technology related versus training and teams and kind of education?

James Massa: It’s culture. It’s culture.

Luke: Yeah.

James Massa: Yeah, because you can think about what, you know, good people. You know, good people.

Right. And it’s the same thing with the LLMs that you’re dealing with. They’re now part of your team culture and your team values, you know, and things like that. It goes right to the core of who you [00:11:00] are. Are you gonna let a person like this, or an LLM like that, work with you?

Luke: Interesting. Interesting. Well, let’s get into the ethical side, you know, especially around finance, right? Like, how, how do you kind of look at ethical considerations or approach them when it comes to finance?

James Massa: So at JP Morgan, and I believe most institutions like us, we already have some core values, and we already have experts in certain areas like privacy and security, and, you know, these are key aspects already there.

So what I’m trying to express here, and forgive me if I’m hammering it home a little too heavy-handedly, is that the LLM is not something new. It seems real new, because it’s new technology. But in the main, what we’re asking it to do is something that humans would’ve done.

I know, I get that. You know, of course there’s some things it can do that humans could have never done. But even then it’s more [00:12:00] along the lines of it’s doing things that humans couldn’t do because it would take humans too long. But if we were gonna apply a thousand humans to this job every day

Luke: mm-hmm.

James Massa: How would we expect them to behave? We want the LLM to behave that way.

Luke: Got it. Got it. Okay. So, are there any specific ethical considerations or areas that you focus in on in the world of finance? I mean, I’ve had people, for example, that work in public policy, or in a lot of civics in academia and things like that, that have certain things they’re focused on, like, okay, we wanna make sure that employment forms aren’t getting, you know, false positives because of some weird parameter that’s set, or something like that.

When you’re looking at this from a finance perspective, is there anything that, you know, you all really zone in on around ethics?

James Massa: We’re concerned most about the false negatives.

Luke: Mm-hmm.

James Massa: Okay. So in AI terms we call that recall.

And we can use a ROUGE score to find that, or a BLEU score to find the precision. So in AI [00:13:00] there’s machine learning and there’s LLMs, but either way you can have precision and recall from something very confusingly called a confusion matrix. Okay, you can look that up. Precision is the one that says: every time I give you a result, the result better be right.

I don’t care if you miss some number of results. Just always gimme good results. Now, what’s an example of that? If I am a financial advisor and I’m going to be calling clients and suggesting something to them, I always want it to be right. I always wanna suggest a product that’s suitable for them and good for them and that they would like to buy.

I always wanna be right. I want very good, what they call, precision about that. And if you don’t suggest calling somebody or other, and it would’ve been good to call them, that’s okay. Just don’t send me to call the wrong client and propose the wrong thing to the wrong client. Right? That’s the precision angle.

Now, on the other side is the recall angle. The recall one is: I don’t want to [00:14:00] miss anything. I’m in compliance; I don’t want to miss any compliance problems. It would be very bad if we were to miss a compliance problem. Or, leaving the world of finance just momentarily: I’m a doctor, I’m looking at the x-ray.

I don’t wanna miss the cancer.

Luke: Right, right, right. Sure, sure.

James Massa: So that’s the recall aspect. You can tune for one or the other. I happen to be in compliance, actually, so a lot of the governance and so forth that I go through when I try to roll out an LLM is looking at whether I have perfect recall. I get a lot of questions.
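The precision/recall trade-off he describes comes straight out of the confusion matrix's four counts (true/false positives and negatives). A minimal sketch:

```python
# Precision and recall from confusion-matrix counts.
# Precision: "every result I give you had better be right."
# Recall: "I must not miss anything" (the compliance / x-ray case).

def precision(tp: int, fp: int) -> float:
    """Of everything flagged, what fraction was actually right?"""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp: int, fn: int) -> float:
    """Of everything that should have been flagged, what fraction was caught?"""
    return tp / (tp + fn) if tp + fn else 0.0
```

A compliance system tunes for recall near 1.0 (missing nothing), accepting some false positives; the advisor use case tunes for precision instead.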

And very interestingly, how it goes is, first they say: how’s your recall? You know, how do you know that you didn’t miss anything? So then we’ll say: listen, we’ve looked at the last year’s worth of data, or some very long period of time, and a human being has combed over it and the machine has combed over it, and the machine got the same or better results over this entire period of time.

That’s great. And the first time this happened to me, by the way, I went [00:15:00] through the governance and I thought: score, I’m done here, Luke. But then it’s actually an even higher bar than that. They go on and they say: but what about sustainability? I says: what’s sustainability?

They say: sustainability means that your machine is working for you right now, but how do you know that it’s still working six months from now? You know, a year from now. I said: what do you mean? Why won’t it keep working? And they say: it’s more like a car than, you know, the other kinds of rules-based software that you’ve been working with all your life, James.

And I says: really? They say: yes, the data can drift on you, for example. Or if you’re using an LLM from a hyperscaler, the LLM itself can change, even if you’re using the same exact LLM with the same version and so on. They can do an update, and they’re tuning the parameters back there, tuning the weights, and it’s changing the results that you’re getting back.

Luke: Yeah. Well, and sure, I mean, the data updates too, right? The data gets fresh over time, I would imagine. And that impacts...

James Massa: [00:16:00] The data can change. Yeah. We’re always, like, buying another company. Here’s an example of how you could visualize it. We’ve got all of these accounts, and we train on one set of accounts.

But if we were to buy another company and get their accounts, now the shape of the data has changed.

Luke: Yeah, yeah,

James Massa: Definitely, definitely. And maybe our model will behave differently, and we’ll have to retrain. So it’s very important that we’re doing something called model monitoring, and that’s part of your MLOps and so on.

So, monitoring the results. We should have some KPIs that we’re expecting, and we should be monitoring to ensure that we’re always hitting those KPIs. And if we drop below some threshold, then we want to have an alert that basically means: retrain the model.
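The monitoring loop he describes, track a KPI over time and raise a retrain alert when it falls below a threshold, can be sketched as follows. The KPI name and the 0.95 threshold are illustrative assumptions:

```python
# Model-monitoring sketch: compare the most recent KPI reading (e.g. recall
# measured against human-reviewed samples) to a threshold, and signal
# "retrain" when the model has drifted below it.

def check_kpi(history, threshold=0.95):
    """history: recent KPI readings, oldest first. Returns 'retrain' or 'ok'."""
    if history and history[-1] < threshold:
        return "retrain"   # drifted below the bar; alert and retrain
    return "ok"
```

In a real MLOps setup this check runs on a schedule, and the "retrain" signal feeds an alerting or pipeline-trigger system rather than returning a string.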

Luke: And I would imagine too, you have to stay really tuned into, you know, navigating the regulatory space and the latest on that front regionally too, right?

Like, how is navigating that with emerging tech? I mean, because obviously you’ve got this background in crypto and the blockchain [00:17:00] side, which has been kind of a regulatory nightmare to navigate for the past, you know, X amount of time. But on the AI side too, it’s really emerging.

As far as people’s awareness of it, and the rubber hitting the road, so to speak. How is that from the compliance side? You spend a lot of time educating different parts of the company around this; from your point of view, how is navigating all of that?

James Massa: Well, there’s a number of different things to do. One is to go through many steps of governance, and have many pairs of eyes, trained people, reviewing anything that you put in production. Another thing to do is minimize what you put in production. Right? So that sounds funny at first, but you can imagine, when you first get started with rolling out AI in your company and doing a lot of internal build and so forth, you might have everybody having at it.

And then you start to get duplication over time. For example, 309,000 people work at JP Morgan. So you can imagine that, wow, not everybody knows each other; not everybody knows every project that everybody’s working on, right? So one of the challenges, [00:18:00] and we actually set up its own set of governance for this, is to have an inventory of all the models that are there, so that we know what’s there and can start to reduce the footprint of what’s there.

It’s very helpful if you have one document processing application versus five or 10 that are all doing the same thing, where you could have five or 10 times as many errors.

Luke: right, right. Sure. Or, or,

James Massa: Or, you know, issues. Right. That minimizes your regulatory footprint five or 10 times, what I just shared, right?

What, what I just shared, right?

Luke: Totally, totally. Yeah. Different policies for each and whatnot. And, you know, potential for things to leak or whatever.

James Massa: It works just like anything else. You know, you have the compliance lines of defense and so forth.

Luke: Mm-hmm.

James Massa: Mm-hmm. And governance before anything goes forward; there’s a lot of different governance depending on what you’re doing.

For example, they have different determinations of whether this is called an analytical tool, whether this is an LLM, whether this [00:19:00] LLM is sending data outside the firm or running locally within JP Morgan, whether this LLM uses data in a different region. You know, the regional aspect is very important.

Every country has its own rules. So that’s incredibly critical, and it’s very challenging to understand. Sometimes I find for myself the path of least resistance is to deploy the LLM only in the US in the first go, right? Like, why figure out what’s going on in country X and risk a mistake for potentially less gain?

Luke: Yeah. Or something’s not really clear, right? And, you know, better to be safe until you get that clarity, so to speak.

James Massa: It’s always better to be safe. The overarching thing is: when in doubt, be safer.

Luke: Yeah.

James Massa: No, that’s great. There’s more to think of than my application that’s going live.

Right. There’s a lot more to think of than my feature; there’s the entire JP Morgan. Right?

Luke: It’s smart that you’re presenting it that way too, ‘cause there is a lot more [00:20:00] thought, mindfulness, you kind of have to have with these things, given the types of engagement that you have with them, right?

And the different types of data that are going in and out of them, and all that. So, no, that’s great. There’s a lot of talk around prompt engineering and different ways that humans are gonna be interacting with these tools. How do you see the role of human experts evolving as AI becomes more prevalent in finance, and just in general work life?

James Massa: The way things are going, it seems, is that humans will always be in the loop.

And the humans, though, I think of them more as being managers now. So everybody’s gonna be a manager. No more individual contributors. Everybody’s gonna be a manager of a team of LLM agents, is my reading of the tea leaves.

Luke: Yeah, yeah. That’s a good bit. And in that vein too, I mean, you mentioned agents; agentic AI is definitely getting a lot more attention now. Are you seeing applications of agentic AI that are really interesting in your world right now? Or is it still something people are kind of just testing and learning with, with [00:21:00] gen AI?

Yeah, absolutely. I mean, like agents being used in addition to...

James Massa: Yeah, for sure. So the way things are moving in agents, I would say, is both in the chatbot space and also in software engineering. I see a lot of discussion and progress moving towards end-to-end solutions that are connected.

So take software, since this is a technology show, the Brave Technologist, right? So if you’re a brave technologist, you may work with the SDLC, the software development lifecycle, and there you’re gonna have requirements and build and test and deploy and operate and so forth. Throughout this lifecycle, you can have an agent for each section of the lifecycle

that does that bit. You can have one agent that makes the requirements, another agent that verifies that the requirements are good, another agent that turns the requirements into test cases, another agent that takes those test cases and [00:22:00] turns them into code that passes those test cases, and so on and so forth.

Right? Like that. At the end of that, you have one person who’s managing it. And if it gets stuck anywhere along the line, say there’s a problem generating the code, the code doesn’t compile, or you raise a PR and the person feels that it’s got a security flaw in it, so they reject the PR. You know, there can be humans in there checking things and rerunning a stage.

But I’m thinking of it going from step to step to step. It’s like an assembly line, and the agents are the workers on the assembly line doing their bit at each step of the SDLC, and the human foreman can jump in and stop the assembly line when it goes wrong, or restart it, or give some advice, that sort of thing. That’s how it can work in software engineering.
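The assembly line he describes, one agent per SDLC stage with a human foreman who can stop the line, can be sketched as a simple pipeline. The stage names and agent functions here are illustrative placeholders, not a real agent framework:

```python
# Agentic-SDLC sketch: each stage is (name, agent_fn); the artifact flows
# stage to stage, and a human "foreman" callback can halt the line after
# any stage (e.g. a failed compile or a rejected PR).

def run_pipeline(artifact, stages, foreman_approves):
    """Run each agent in order; the foreman can stop the line at any stage.
    Returns ('done', artifact) or (stage_name, artifact) where it stopped."""
    for name, agent in stages:
        artifact = agent(artifact)
        if not foreman_approves(name, artifact):
            return name, artifact          # line stopped at this stage
    return "done", artifact
```

Rerunning a stage after a fix is just calling the pipeline again from that point; the foreman callback is where the human-in-the-loop lives.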

Would you like to talk about chatbots?

Luke: Yeah, yeah. Let’s go for it. I mean, let’s dig into that side of it too.

James Massa: A chatbot might work like this. By the way, I would say chatbots do work like this at [00:23:00] many of the big chatbots that we know online, right?

Yeah. So how are they working? First there’s an orchestrating LLM, so that’s the first agent. It’s just like a person, the operator. So that orchestrator takes your request or your question. And these days, a question can be handled sort of end to end with these agents, right?

You’ll say: can you get me a ticket on an airline, or something? And it’ll say: yes, I already know this and that about you; would you like to leave at this time or that time? And you say yes, and it says okay, and then an agent will go and actually do the work of getting you the ticket, charging you, sending you a bill.

Giving you the information, doing all those different things that might have been a few different steps previously, of different programs, maybe with humans in between and so forth. The agentic architecture, from my perspective, is this idea of going end to end and combining multiple steps that perhaps used to [00:24:00] be individual steps with a human in between.
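The orchestrator pattern he describes, a routing LLM that picks a worker agent, which then runs the whole task end to end, can be sketched like this. The keyword routing and the `book_flight` worker are hypothetical stand-ins; a real orchestrator is itself an LLM classifying intent:

```python
# Orchestrator sketch: route a request to the worker agent whose intent
# keyword it matches; the worker handles the whole task end to end.

def orchestrate(request: str, agents: dict):
    """agents maps an intent keyword to a worker function."""
    for intent, worker in agents.items():
        if intent in request.lower():
            return worker(request)
    return "Sorry, I can't help with that."

def book_flight(request: str) -> str:
    # A real agent would search fares, book the seat, charge the card,
    # and send the bill; here one string stands in for those steps.
    return "booked + charged + billed"
```

The point of the pattern is that what used to be several programs with humans in between collapses into one routed, end-to-end flow.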

Luke: It sounds like it could be a huge benefit, especially on the finance side, with people and everything from, you know, managing budgets to trying to look at what different options there are. I mean, the markets are so big now, right? There’s so many different options out there, and even these new ones, right?

It is interesting with your background too, in the blockchain space. I mean, you’ve seen how difficult it can be for adoption. So it seems like some of these things could be very helpful for, you know, getting broader adoption, and educating people along the way, and helping them out with parts of the process too, I would imagine.

How are you seeing the next days of crypto and AI converging, from your point of view? It can be anecdotal or whatever.

James Massa: The biggest thing is just that the regulations around both crypto and AI have been less firm. That’s one of the reasons why it’s so risky to roll out either one.

You don’t know where you’ll end up later, right? And to invest in either one, and spend years working on either one, and put [00:25:00] money into it either way. Right. And that’s why we wanna minimize what we’re doing. I think once the rules of the road around both become more clear, then we’ll be better off.

Right. And that will be, like we mentioned, one thing about the global consistency, you know? There’s some level of global consistency about basic things like, I know, money laundering, right? I know that there are some rogue countries, but you know, to a large degree something like money laundering is well understood, with similar regulations around the world.

Or in countries that are close to the US. And we want the same for our technology, right? Which, by the way, is going to be governing our finance and so forth, and our money. So because of that, it’s very good to have similar rules. Right now, I would say, clearly, you know, the privacy rules and so forth in Europe are a lot different.

The AI rules are less clear. Some things are less friendly to business too. You know, having any rules at all is the most friendly to business; consistency is the most friendly to business. But also some rules are [00:26:00] sort of there to say, like, we don’t care if you make profits, you know.

Some rules are just kind of extra hoops to jump through, you could call it.

Luke: Sure, sure. No, and I think that’s a really good point about the global consistency, right? Because it’s not like this is starting from zero either, right? The World Wide Web has been around for a long time.

People have been using protocols globally; there are examples out there. And I think, yeah, it is interesting. The rules of the road are good to have, and there’s a lot of commonality too, from thing to thing. That’s super interesting.

For people, whether they’re in businesses or getting into the space themselves: do you have any advice for folks that are coming into the space around adopting AI technology, or any advice for those people that are in organizations? I know we talked trust earlier and things like that, but any other advice you might have for folks that are maybe mandated to do things with AI now and are trying to find their way?

Sure.

James Massa: Well, one of the first things that many teams and many organizations, I think, are grappling with is [00:27:00] along the lines of: this AI transformation we’re gonna go through, is this an emergency? Because we’re gonna be disrupted by other AI players, and, you know, we feel that level of pressure, that the risk of AI is less than the risk of being disrupted by AI.

There's that, versus another lens that some folks put on it: they say AI is just another technology. Before this it was cloud; before that it was data strategy. This is just yet another technology, and we have to look at the business case for using it.

We have to do a cost-benefit analysis of every single rollout, right? And things should be completely controlled that way. And that, of course, will have a tendency to stifle some level of innovation. Versus other folks who are like, we'd better innovate. We've got to be there fast. We need to be at the forefront.

So weighing those two aspects and understanding: are we going to innovate grassroots from the bottom up, and let [00:28:00] everybody everywhere have some level of freedom to innovate quickly? Or is it going to be much more top-down driven: we're going to invest in a few big bets and keep things controlled, and maybe make a smaller footprint to reduce risk, or to reduce costs in some way.

Make sure that we're not doing duplication. If you do allow duplication, you'll get more innovation and more things will pop faster, right? There'll be some level of competition even within the company, and the best things will come up. But there's some duplication of cost as well, right? Sure, sure.

Duplication of risk as well. Yeah. These are things that have to go through the organization's head in order to decide where they're going. And I think there's a continuum, and people probably start someplace on the continuum and move toward the center. In my anecdotal experience, I find some companies start out very conservative or very aggressive, and then they move toward the [00:29:00] middle over time.

Luke: There's definitely a hype cycle around this too, where all the executives are coming in saying, let's get the AI stuff everywhere, and everyone's like, well, where's the fit?

And so trying to find that fit, right? I think that's a really good approach. You've been super gracious with your time today. Is there anything we didn't cover that you want to let our audience know about, or want to shine some light on while we've got you?

James Massa: The next big thing is quantum, so let’s look out for that.

Luke: Yeah, okay, cool. Awesome. Where can people find your work online, or reach out to say hi?

James Massa: Hey, you can find me on LinkedIn; that would be really great. James Massa.

Luke: Excellent. Well, James, I really appreciate you coming on the show and sharing your insight with us.

I know our audience probably learned a lot, and I'd love to have you back sometime to check back in on things and see how things are going.

James Massa: Oh, that'd be great, Luke. I had such a wonderful time. Thanks for having me.

Luke: Awesome, thanks. Thanks for listening to the Brave Technologist Podcast.

To never miss an episode, make sure you hit follow in your podcast app. If you haven't already made the switch to the [00:30:00] Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • How he’s succeeded in applying trust scores to blockchain and DeFi projects by analyzing code vulnerabilities, suspicious transactions, and social media sentiment
  • Using retrieval augmented generation (RAG) to ensure chatbots provide only pre-approved responses
  • Balancing precision (always giving correct results) with recall (not missing any compliance issues)
  • The importance of trust in AI, particularly in the context of blockchain and decentralized finance
  • A practical roadmap for organizations navigating the AI revolution, bridging the gap between innovation and responsible governance
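The second technique above, using retrieval to restrict a chatbot to pre-approved responses, can be sketched in a few lines. This is a toy illustration, not JPMorgan Chase's implementation: the approved Q&A pairs, the bag-of-words cosine similarity, and the `threshold` value are all stand-in assumptions. A production system would use learned embeddings and a vector store, but the guardrail logic is the same: only pre-approved text is ever returned, and anything below the similarity threshold gets a refusal instead of an improvised answer.

```python
import math
from collections import Counter

# Hypothetical set of compliance-approved question/answer pairs.
APPROVED = {
    "how do i reset my password": "Visit the account portal and choose 'Reset password'.",
    "what are your support hours": "Support is available 9am-5pm ET, Monday through Friday.",
}

def _vec(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(query: str, threshold: float = 0.5) -> str:
    """Return the closest approved answer, or refuse if nothing matches."""
    qv = _vec(query)
    best_q, best_score = None, 0.0
    for approved_q in APPROVED:
        score = _cosine(qv, _vec(approved_q))
        if score > best_score:
            best_q, best_score = approved_q, score
    if best_q is not None and best_score >= threshold:
        return APPROVED[best_q]  # only pre-approved text is ever emitted
    return "I'm sorry, I can't help with that. Please contact support."
```

The refusal path is what makes this a guardrail rather than ordinary RAG: an off-topic query like "tell me a joke" shares no terms with any approved question, scores 0.0, and falls through to the canned refusal.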

Guest List

The amazing cast and crew:

  • James Massa - Senior Executive Director of Engineering and Architecture at JPMorgan Chase

    James Massa is the Senior Executive Director of Engineering and Architecture at JPMorganChase. He holds six patents covering subjects such as AI Data Quality, cloud cost management, multi-teacher LLM distillation, and model self-healing. In 2024, James led a team that won the FSTech award for Best Financial Services IT Team, migrated 53 apps to AWS, published two IEEE papers on LLM blockchain security, and presented 14 keynotes, including one at UCSD.

    He holds a master’s degree from the computer science department of Harvard University, and a master’s degree in finance from the City University of New York.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.