
Episode 108

When AI and Enterprise Tech Feel Like Magic [Live from AI Summit]

Kapil Gupta, Enterprise AI Product & Platform Leader at Cigna, shares insights from more than two decades of turning cutting-edge technology into enterprise-ready products. He unpacks the difference between generative AI and agentic AI, and why governance, user choice, and thoughtful design matter just as much as innovation. Learn how enterprises can scale responsibly and why the best technology often feels invisible to the people using it.

Transcript

You’re listening to a new episode of The Brave Technologist, and this one features Kapil Gupta. Kapil works for The Cigna Group as a senior director on their generative AI program. Kapil specializes in leveraging emerging technologies to solve complex business problems at scale, and is driven by a focus on crafting AI-driven product experiences that ensure high adoption.

He balances a high-level strategic vision with a passion for staying hands-on, often vibe coding prototypes to prove out new concepts. In this episode, we discussed: how generative AI differs from agentic AI; the power of questioning and probing AI outputs; the ways enterprise AI can integrate with existing workflows to drive adoption without disrupting users; and what the future of AI looks like, from invisible software to personal assistants that feel like magic.

And now for this week’s episode of the Brave Technologist.

Luke: Kapil, welcome to the Brave Technologist.

How are you doing? It’s a [00:01:00] pleasure to be here. Thanks, Luke. Yeah, thank you. Appreciate you coming on. I’m excited for this one. So, you were on stage at the AI Summit. What did you hope listeners would leave feeling or thinking about differently? I’ll talk about two different aspects.

I think we had a really good audience. They were very appreciative. We got, you know, a pretty full room there. I’ll talk about the enterprise perspective and perhaps the user perspective. Sure, sure. I think there’s a lot of discussion about

the wonderful things that gen AI can do. We see it in our daily lives. We experience it at work. There’s obviously a lot of talk about it here at the summit. I think one thing that I hope people took away from that is that while there’s a lot of promise and a lot of great things we can do with gen AI, for us to be able to trust it, you know, you have to have the right type of governance around it to make sure it

works properly in an enterprise context. I will say, from a user perspective, I hope users, you know, folks in the room, came away with a feeling that while there’s a lot of discussion about what AI can [00:02:00] do, from a user perspective there’s almost perhaps a sense that AI is something that gets done or gets used.

And, you know, it happens to me. But I think from a user perspective, there’s a lot of opportunity for users to make their own choices about how they use AI, when to trust it, and kind of use it in ways that make the most sense for themselves. Yeah, I think that makes sense.

I think those are good points too, especially around trust. There really is a lot of hype and a lot of, like, you know, magical thinking. The trust is super important.

Your talk explored the partnership between agentic and generative AI. From the perspective of an everyday user, what does that partnership actually unlock?

So, you know, generative AI is very good at cognitive tasks, right? Tasks which involve things like drafting a document, summarizing something, and so on. And from a user perspective, while that’s very useful, it’s sometimes hard to get a tangible sense [00:03:00] for what the value was, beyond the time it saved you.

But in contrast, with agentic AI, which is focused on workflows and tasks and outcomes, for a user that’s much more tangible. Yeah. Because the agent did something for you. You know, both provide value, but agents provide a lot more tangible value. Yeah. For users to feel like, oh, it did this

task or set of tasks for me. And even better if the agents can do things that the user doesn’t like doing. Yeah. Yeah. Right. What are some of the biggest misconceptions around it? I would say that a lot of people sometimes just assume that when they ask a question and they get something back and it looks right, that it must be right.

And it might even actually be right; I’m not talking about the verification part of it. But there’s an element of, you know, the way you use gen AI, which is very interesting, which is that you can actually, in a sense, push back. Mm-hmm. You can challenge, yeah. You can [00:04:00] question that.

So I think a lot of users don’t realize, until maybe they stumble on it or until somebody tells them, that you can actually push back, even though it looks right, and say, but no, that’s not what I meant. Or, I get it, but that’s not quite right. And very often you end up unlocking much more value from questioning and probing and challenging the AI than if you just took the answer because it seemed right.

Yeah, that’s a great point. Yeah. I think it’s, like, one of those real hacks with, uh, figuring out prompting, that you can kind of shape the dialogue, you know? Right. And get the answer you’re kind of looking for, you know, by pushing back a little bit. Correct. And the first answer might have been quote unquote good.

Correct. It’s not always... I mean, obviously if it’s wrong, then you definitely wanna push back. But even when the answer is quote unquote correct, it might not be fully aligned with the context in which you are trying to use gen AI. Yeah, yeah. No, that’s a great point too. There’s nuance in the way you can kind of shape the conversation.
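The push-back pattern described here can be sketched in code. This is only an illustrative shape, not any specific vendor’s API: `stub_model` is a made-up stand-in for a real chat endpoint, and the point is just that each challenge gets appended to the running message history, so the model answers with the accumulated context in view.

```python
def stub_model(messages):
    """Toy stand-in for an LLM chat endpoint. It just reports how many
    user turns of context it has seen, so the refinement loop is visible."""
    user_turns = sum(1 for m in messages if m["role"] == "user")
    return f"answer v{user_turns}"

def refine(model, question, pushbacks):
    """Ask the question, then challenge the answer with each follow-up,
    keeping the full history so context accumulates across turns."""
    messages = [{"role": "user", "content": question}]
    answers = []
    for _ in range(len(pushbacks) + 1):
        reply = model(messages)
        answers.append(reply)
        messages.append({"role": "assistant", "content": reply})
        if pushbacks:
            # Push back on the answer we just got, as another user turn.
            messages.append({"role": "user", "content": pushbacks.pop(0)})
    return answers

answers = refine(stub_model, "Summarize this contract.",
                 ["That's not what I meant; focus on the penalties.",
                  "I get it, but shorten it."])
print(answers)  # ['answer v1', 'answer v2', 'answer v3']
```

With a real model, each successive answer would be reshaped by the pushbacks rather than simply versioned, but the message-accumulation mechanics are the same.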

Yeah, [00:05:00] no, you’re totally right on. Yeah. So you’ve kind of spent two decades really, like, turning cutting-edge tech into products that real people can use. What’s the moment you realized AI was kind of ready for scale inside a large enterprise? So if we talk about AI at scale, I’ll talk about actually two moments.

Yeah, that’d be great. One really from a technology perspective, and one from a user perspective. From a technology perspective, this actually dates back to something like 10 years ago. Oh, wow. So it was perhaps in the earlier days of machine learning, and I was at JPMorgan Chase at the time.

We’d had a leader recently come over from another organization, another financial services firm, and he was talking about using machine learning models at the time, which is, you know, effectively still AI. It’s not generative AI, it’s not agentic, but it’s still AI. I’m talking about using AI models to try and match marketing offers to customers.

And they were describing, you know, how they were doing it, or at least what the problem space was like. [00:06:00] And what he described was essentially that they were looking at, in that particular instance, 2,000 attributes for every type of customer. You know, what type of preferences a person has.

Where do they like to travel? What kind of things do they like to buy? So, really detailed customer profiles with up to 2,000 attributes, but doing this for 20 to 30 million customers. Wow. So, you know, back then, I’d say 10 years ago, if I was thinking of that as a database and thinking about, you know, what size

database that would be, by then we already had large databases, right? So the size of that wasn’t striking. But the aspect of having models which would try to assimilate the full set of data and extrapolate from that some types of, you know, predictions that they could use to target offers, for me, that was the part that surprised me. Because even before then, we’d had very, very large databases for probably 15, 20 years, but usually in those

databases you are going into it and looking for a piece of data and extracting it, right? Or looking across multiple datasets, you know, joining them, pulling out a specific piece of insight. This was an element of looking at a large dataset in total and, as a whole, drawing insights from it. So for me, from a technology perspective, it was my first thought of, like, wow, we are now at a point where we can operate on those datasets as a whole, at scale.
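Just to make that scale concrete, here is a rough back-of-envelope. The attribute and customer counts come from the conversation; the bytes-per-value figure is an assumption for the sketch.

```python
# Back-of-envelope sizing for the feature matrix described above:
# ~2,000 attributes per customer profile, for ~30 million customers.
customers = 30_000_000
attributes = 2_000
bytes_per_value = 4  # assumed: e.g., one float32 feature value

total_bytes = customers * attributes * bytes_per_value
print(f"{total_bytes / 1e9:.0f} GB")  # 240 GB
```

A quarter-terabyte feature matrix is not remarkable as stored data, which matches the point being made: what was new was fitting models over the whole matrix at once, not merely storing or querying it.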

I will say, though, at a human level: even earlier this year at Cigna, we were building a custom internal chat assistant. And I was talking to users as we were kind of building it and refining it. And there was a user who said that, you know, at the time we were allowing users to export the responses as comma-separated (CSV) files, but not as Excel.

And so the user had been trying to ask the assistant, I need an export in Excel. And the assistant said, I can’t do Excel, but I can give you a CSV. Yeah. And you can take that and open it in Excel.

For me, it unlocked that aspect of the technology being able to act in unexpected ways within certain, you know, approved norms. Yeah. And kind of extrapolating that out to scale,

it gave me a sense that yes, we can actually put this out in front of a larger, you know, set of users in the enterprise, knowing that it acts in ways that are still safe. Yeah. And kind of controlled. Yeah. And it kind of gave them a solution, right? Yeah, exactly. Which is really interesting. That’s a really interesting example.

If you could set kind of a new benchmark for what makes an enterprise AI product truly great, what features or values would you prioritize? So I guess for me it comes down to a few things. The first, I guess I would say, is that I love to build products where we’ve kind of mastered the complexity. Yeah.

Where the way it works is kind of hidden under the covers. And so for a user it feels like, oh, this is like magic. Yeah. Right. Yeah. I think there’s another aspect, from an enterprise perspective, that becomes really important, which is to listen to users very carefully when we are building these products. Because

users will often describe a solution [00:09:00] instead of a problem, right? They tell you, I want it to work this way. And very often the underlying problem is something different, right? Right. And so you have to probe a little bit to make sure that you’ve understood the core underlying need and kind of make sure you solve for that.

And I guess to me those are the elements that really come to mind, because they both involve really listening very carefully to users, as well as paying a lot of attention to exactly what you’re trying to build, and doing it in a way that surprises the users.

Yeah. And kind of helps them, in the end, you know, achieve what they want to do. Yeah. And they’re the ones using it, right? Like, I think that gets lost a lot, you know, especially with AI, where it’s such a top-down mandate to, like, put it in all these things, but not enough people are actually, like, kind of talking to users to

find that market fit, right? Like, okay, here’s how I’m actually gonna use the thing. Yeah. And it’s such a good point that you touched on. So I think, uh, you’ve seen firsthand how enterprises kind of balance innovation and compliance, right? Like, obviously it’s gotta be top of mind. What’s [00:10:00] a strategy or mindset that helps teams move quickly without introducing unnecessary risks?

So there’s a couple of ways that I’ve seen at, you know, many of the companies I’ve worked at, even at Cigna and other places. One is to have, typically, what they call a sort of internal lab. Mm-hmm. Right? Where you can do experimentation in a controlled environment. It gives you a chance to kind of push the envelope, look for, you know, emerging technologies.

Sometimes when you are doing that experimentation, you don’t quite know how it’s going to work, whether it’s gonna work in a controlled fashion, whether it’s going to, you know, meet your needs. So I think there’s an element of having that type of controlled lab environment. I think the other element of this is, again, to involve users as much as you can.

I think the second thing I’m gonna mention doesn’t always work in every company. It depends on the kind of business you have, the types of employees, you know, the kind of work they do. But one other aspect that’s been very interesting that I’ve observed is the [00:11:00] ability to set up, let’s say, a program where you can provide early access to experimental tools. Yeah.

To users and employees who have, let’s say, a different threshold for, you know, the kind of support they’re gonna get, or for the stability of your software. Right? Yeah. Because within large enterprises, you’ll always have, typically, I would say, a large segment of employees who aren’t necessarily ready for software that isn’t working right.

They need it to work in a certain way, and they need it to function every day, because they need to get their jobs done on a certain schedule. But you do often have a smaller proportion, maybe, of your employees who are able to deal with experimental software. Yeah. And that gives you a chance and a mechanism to kind of roll out, let’s say, beta software.

Yeah, yeah. Like a lead user base, kind of. Yeah. Yeah. And in terms of balancing it out with governance, right, I think the more that we can bake [00:12:00] governance into the products that are built, through some types of, you know, workflows and engines that check for certain types of rules and compliance, almost the notion of what they call, you know, governance as code. You know, those are elements that kind of help you navigate how to build and deploy technologies in a sort of safe and controlled way. Oh, that’s great. Yeah. What are some of the biggest challenges that organizations face when integrating generative and agentic AI?

So, I think a common problem that I’ve seen again and again, certainly in large enterprises, has been, one, the aspect of legacy systems that, you know, aren’t ready for use with AI. They don’t have interfaces or mechanisms, you know, to interact with them very easily.

I think the other element that tends to be very prevalent, particularly in large enterprises, is disconnected data, where you have data that’s sitting in different systems all across the enterprise. They don’t really have a mechanism to talk to each other, or for you to [00:13:00] reconcile the data and kind of, you know, convert it into a single layer.

And what happens is that gen AI is moving so fast, and a lot of companies, or especially leaders, hear about it and tell their teams, oh, we’ve gotta jump on this. We gotta do it right now. We gotta find out how to embed gen AI in our products. The intent, I think, is good.

And I think it’s right. But I think the reality is that for you to be able to leverage gen AI, you almost need to have, as a prerequisite, data and systems that talk to each other, so that you can layer gen AI on top of that. Yeah. And I think a lot of companies find out, when they go to implement their gen AI and agentic systems, that before

they can start doing that, they have to go in and invest some more effort. Yeah. In kind of bringing these two together. Yeah. In ways that you can actually use. Cleaning up technical debt and that kind of thing. Exactly. Yeah. Yeah. Yeah. That’s interesting. Like, getting a big company to adopt a new [00:14:00] platform is sometimes harder than building the platform itself.

Right. What do you find most effective in driving adoption in a large enterprise? So, I would say the thing I’ve experienced a lot, or a couple of things. One is, and I’ve told my team, you know, whenever I’m working with my team to build a new platform or to replace something, I tell them a couple of things.

One is that it’s really important for us to try and build platforms that integrate with or complement users’ existing workflows. Yeah. What I mean by that is, if the employee, or the user, has a certain way of doing their job today, and it involves doing tasks in a certain order, A, B, C, and you tell them, I’m building this new platform,

but what you’re gonna have to do is change the order in which you do things, because the platform’s built to do things differently. Yeah. The challenge then becomes that you’re putting the onus on them to make that shift. Yeah. And so it becomes really important to build out the workflows of the platform so that they actually [00:15:00] complement the way

in which those users work today. Yeah. And so for them, the bar is lower, right? They still work the way they do today, but they’re able to leverage the technology, and it helps them kind of get their work done. Yeah. I think the second thing that I emphasize a lot, especially in this context, when you’re trying to make users change their behavior: if you are asking them to do something different from the way they do it today,

it becomes really important to offer them, in terms of value, something that they don’t have today. Right. Or something that addresses a pain point that they have today. Yeah. Because it’s very hard to incentivize a user and tell them, I have a new tool or a new platform, but it’s gonna do exactly the same things as the old tool did.

Right, right, right, right. And it might be a new stack. It might be, you know, new technology. But from a user perspective, they’re gonna look at it and say, I know this is [00:16:00] fancier and looks prettier, but if it’s doing the same thing, I’d much rather just keep doing what I’m doing now. Yeah. Why should I switch?

And so you have to offer them something of value, not just that it’s new. Yeah. And that it’s shinier. Yeah. But, you know, in the old system you had this problem, or you had this pain point where you had to do certain things yourself. Yeah. Yeah. And that gets solved here, or I can save you this much time. You have to offer them something of value so they feel that the shift is worth it. Totally.

For them to learn and take the effort to move to a new platform. That’s a great point. Yeah, I think it’s a really good point. I mean, I think there’s also kind of this race to innovate, like, that’s happening right now. What conversation are we not having about all this? So I would say, in the context of gen AI, there’s a lot

of talk about, as we touched on earlier, trust and verification and validation. Everywhere you go, there’s a lot of talk about evals, and how you use evals to, you know, ensure that [00:17:00] you are not just designing your gen AI products correctly, but also running them and making sure they keep running the same way.

What’s, I think, missing from that conversation, or at least not getting enough attention right now, is the cost of running those evals. Because those evals actually tend to be gen AI calls themselves. Yeah. And very often it can be as expensive or more expensive to run those evals than to run the gen AI application itself.

Yeah. And I think as more organizations and enterprises start running evals at scale, they’ll catch up to that reality, that there’s a cost there that needs to be factored in. Because it has value. Yeah. But you have to factor that in at the beginning, so you’re not surprised when you go to run them at scale.
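A toy calculation shows why that cost catches teams off guard. All of the prices and call counts below are invented for the sketch; the only point taken from the conversation is that each application response can trigger several eval (LLM-as-judge) calls of comparable cost.

```python
# Illustrative daily cost of a gen AI app vs. its evals.
# Every number here is hypothetical.
app_calls_per_day = 10_000
cost_per_call = 0.002        # dollars per LLM call, assumed equal for app and judge
judges_per_response = 3      # e.g., separate accuracy, tone, and safety checks

app_cost = app_calls_per_day * cost_per_call
eval_cost = app_calls_per_day * judges_per_response * cost_per_call

print(f"app: ${app_cost:.2f}/day, evals: ${eval_cost:.2f}/day")
# app: $20.00/day, evals: $60.00/day
```

With three judge calls per response, the eval bill is triple the application bill before any tuning, which is the "as much or more expensive" effect described above.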

Yeah, that’s a great point. That’s a great point. I have a couple of rapid-fire questions. Agentic or generative AI, which do you instinctively trust more? I would say generative AI, partly because, or mostly because, since you’re asking about trust, it’s easier to verify those outcomes.

Yeah, right. You have a certain output that comes back, and you can [00:18:00] sometimes ask the LLM to check itself. You can go yourself and look up different sources to try and make sure that it’s correct. I think with agentic, you can do it, but, you know, it is a little harder because it’s a multi-stage process, and so verification takes a lot more effort.

Yeah, a lot more time. Yeah. In terms of how you do it. Yeah. What’s the best compliment someone can give your product? So, I’ll say this in two different ways. One way doesn’t quite capture in text, but it’s like the chef’s kiss. Yeah, yeah, yeah. You know, someone telling you that it gave them that

feeling. I think the written version of that would be a user saying it felt like magic. Yes, yes. Right. And saying, oh, wow, I didn’t know how it works, but it felt like magic. Yeah. Yeah. And I think, to me personally, that’s what I kind of live for. That’s, you know, what keeps me going.

That’s the win, right? Yeah. That’s a win. What’s, uh, one bold prediction you have for the future of tech that you’re willing to bet on today? This is a tough one. I guess [00:19:00] if I had to try to think of something that’s beyond what we see, I think we see a lot of our interactions with AI changing from, you know, the written form to, let’s say, voice.

Yeah. And so on. So if I try to extrapolate that out into the future, and I think of, you know, someplace beyond where I would normally think, I think one thing we’ll see is software becoming invisible. Yeah. And what I mean by that is that we’ve spent, I don’t know, maybe 20, 30, 40 years or more, right, teaching users to use products in a certain way.

You click on this button, you choose this menu option, you fill out this form, or say this as a prompt, or a, you know, spoken prompt. But they all involve training users to use software in a certain way. And I think when I say software becomes invisible, what I mean is [00:20:00] that, you know, imagine a world where you have some type of personal, you know, agent assistant that is always interacting with you, in whatever way

is most useful in that context. It might be voice, it might be some type of hand gesture or, let’s say, facial indication, but different forms of interaction. Yeah. Where the agent is translating what you want into the goal and the outcome you want to achieve.

Yeah. And potentially spinning up the required interface if further interaction is required. Something that’s kind of ephemeral, where it kind of spins up maybe a UI and says, oh, enter something here. Yeah. Right. Or do this, or select something here. And then those things go away, and you are just interacting with an assistant that is kind of continuously interacting with you and taking actions on your behalf to achieve the outcome that you need.

Awesome. Yeah. Like a Jarvis kind of interface. Exactly, a kind of Jarvis interface, where [00:21:00] there is software there in the end, but we don’t see it. Yeah. Yeah. No, I think that’s great. Thank you again so much for coming by for this conversation.

Really appreciate it. I think our audience will like it too. I enjoyed being here as well. Thank you for having me. Excellent. Thanks.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • The tangible difference between generative AI and agent-based workflows
  • Why adoption depends on fitting into existing workflows, rather than forcing behavior change
  • The challenge of legacy systems and disconnected data
  • How companies can innovate quickly without introducing unnecessary risk
  • How pushing back, probing, and questioning AI can unlock more value
  • Why listening to users matters more than building flashy features

Guest List

The amazing cast and crew:

  • Kapil Gupta - Former Enterprise AI Product & Platform Leader at Cigna

    Kapil Gupta is an executive product leader specializing in leveraging emerging technologies to solve complex business problems at scale. As a leader of AI product and platform teams at Cigna and previously at industry leaders like Capital One, Deloitte, and IBM, he has turned breakthrough innovations like Generative AI into practical enterprise solutions.

    Kapil is driven by a focus on crafting AI-driven product experiences that solve real problems and ensure high adoption, bridging the gap between sophisticated technology and business value. He balances high-level strategic vision with a passion for staying hands-on, often vibe coding prototypes to prove out new concepts. Kapil holds an MS in Computer Science and an MBA from NYU Stern. He shares his work at kapilgupta.me and lives in New York.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.