Episode 62

LIVE FROM AI SUMMIT: Lenovo is Shaping AI for Smart Cities and the Greater Good

Dr. Jeff Esposito, Engineering Lead at Lenovo R&D, shares how his team is shaping the future of AI with innovations like the Hive Transformer and EdgeGuard. He emphasizes the importance of ethical innovation and building technologies that are intended to serve society’s greater good. He also stresses the value of collective contributions and diverse perspectives in shaping a future where technology effectively addresses real-world challenges.

Transcript

Luke: [00:00:00] You’re listening to a new episode of The Brave Technologist, and this one features Dr.

Jeff Esposito, who's an engineering lead at Lenovo and has over 40 patent submissions in generative AI for this year alone. Jeff had a long background in research and development at Dell and Microsoft before coming to Lenovo.

He lectures on advanced technological development at various US government research labs and believes that technology is at its best when serving the greater good and social justice. In this episode, we discussed how Lenovo is shaping the future of AI with innovations like the Hive Transformer and EdgeGuard, the impact of quantum computing and neuromorphic chips on AI's evolution, AI's role in building smarter cities through Lenovo's collaboration with NVIDIA and other partners, and why ethical AI matters and how technology must serve society's greater good.

Now for this week’s episode of the Brave Technologist.

Speaker: Jeff, welcome to The Brave Technologist, man. How are you doing?

Speaker 2: Great, Luke. Thanks so much for having me. It's a hoot to be here.

Speaker: I'm glad to have you [00:01:00] here, too. We're here at the AI Summit in New York.

Speaker 2: The very noisy AI Summit here in New York.

Speaker: Indeed, indeed it is. A lot of buzzing around. I know you're speaking at the conference.

You want to share a bit with the audience about what you're talking about?

Speaker 2: Oh, sure. We're going to talk about AI futures, which is a nice way of saying equal parts what we'd love to be and what we are. And the only way to get to the future, if I can paraphrase and abuse poor Descartes one more time, is that our anticipation and our belief in a result allows us to scaffold and build to that result.

So I think that’s kind of the core idea I want to get across to people, that it isn’t some mystical Jules Verne or Star Trek journey. It’s simply saying, where are we and where do we want to go with what we have?

Speaker: Love it. I love it. Oh, thank you. Can't wait to get deeper into this too. When we think about things like, uh, next gen infrastructure for the democratization of AI, what does that mean? Like, when you're thinking about kind of breaking this down for, you know, people that are new to this space?

Speaker 2: Everybody likes to throw the word around, autonomous, right? And self, uh, managing. But we actually do have that now. Codename [00:02:00] VINA, uh, Visual Insight Network for AI, and what I called EdgeGuard, which the lawyers made me stop calling that, and now call edge guarding.

What's cool about them is that they're actually able to work on their own if they lose connection to the greater network. Okay, so when we talk about self-sufficient functionality, there it is. Yeah, right. So what it does is it does its best. Each of these technologies combines machine learning and symbolic logic to do the best they can with the information at hand and the retained history. Mm hmm. So it combines the best of data-driven decision making with trended history.
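To make that "machine learning plus symbolic logic on retained history" idea concrete, here is a minimal, purely illustrative sketch of a self-sufficient edge node. Every name and threshold below is invented for illustration; none of this is Lenovo's actual EdgeGuard or VINA code.

```python
# Illustrative sketch only: a toy "self-sufficient" edge node.
# All names (EdgeNode, thresholds, etc.) are hypothetical, not Lenovo APIs.

class EdgeNode:
    """Keeps a rolling local history so it can keep deciding when the
    cloud link drops, blending a data-driven trend with a symbolic rule."""

    def __init__(self, history_limit=100):
        self.history = []           # retained local observations
        self.connected = True       # link to the "greater network"
        self.history_limit = history_limit

    def observe(self, value):
        # Retain a bounded window of history for offline operation.
        self.history.append(value)
        self.history = self.history[-self.history_limit:]

    def decide(self, reading):
        # Data-driven half: compare against the trended history
        # (a simple mean stands in for a learned model here).
        baseline = sum(self.history) / len(self.history) if self.history else reading
        anomalous = reading > baseline * 1.5
        # Symbolic half: an explicit rule that applies regardless of the model.
        if reading > 100:
            return "alert"          # the hard rule fires no matter what
        return "alert" if anomalous else "ok"

node = EdgeNode()
for v in [10, 12, 11, 9]:
    node.observe(v)
node.connected = False              # lose the cloud: decisions still work locally
print(node.decide(30))   # well above the local trend -> alert
print(node.decide(10))   # near the local trend -> ok
```

The point of the sketch is the split: the historical trend plays the machine-learning part, while the explicit threshold rule is the symbolic part, and both keep working from locally retained history after connectivity is lost.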

Speaker: Okay. Okay. Excellent. Excellent. How do you see this becoming more accessible, uh, lowering barriers to smaller players in this space?

Speaker 2: Well, I think this, you know, your podcast, Luke, is an excellent vehicle for this. Because, you know, everybody likes to talk about how patenting things is really supposed to be about educating the public.

Yeah. And raising the standard of practice in engineering. But I really think good old conversations like this are a great way to get the [00:03:00] word out and to make it really accessible.

Speaker: I agree.

Speaker 2: Because by talking about it, it becomes that much more accessible. Accessible as opposed to, hey, read patent 17715.

Speaker: Oh, do I have to? Right, right. Well, and I think too, especially in the context of, like, you know, you're at Lenovo. Lenovo is huge. It's just a staple, especially in engineering and, you know, the whole development community.

Speaker 2: It's a wonderful company. And I am so proud of what we're doing to make the difference, to be relevant to the customers, and to not just draw on a strong history of innovation, but to be all in on what we can do with AI innovatively, but more importantly, relevantly to the customer. 'Cause it's not just tech for tech's sake.

Speaker: Right.

Speaker 2: If we can’t produce the right results for our customers, then we’re not really where we need to be. Yeah. And I don’t care if it’s AI or good old COBOL and punch cards,

You gotta be relevant.

Speaker: Yeah. Yeah. No, that makes, makes sense. Well, um, I think, uh, let's dive in a little bit on these, uh, [00:04:00] neuromorphic chips and quantum algorithms. They're cutting edge concepts, right? Like, how close are we to seeing those in mainstream applications? Or maybe we can unpack what they are first, for some folks that are not necessarily familiar.

Speaker 2: Okay, so when we talk about quantum computing and neuromorphic chips. Yeah.

We're talking about those things which adapt neural networks, and neural networks are nodes or associations and circuits. And when we talk about quantum, everybody likes to say quantum. Yeah, it's such a buzzword. It's such a buzzword, and I loved in the Ant-Man movie from Marvel, the guy goes, do you guys just put the word quantum in front of everything?

But it’s a, it’s a classic physics experiment that’s so exciting, and having done a post doctoral fellowship in quantum computing, I can tell you that we’re already there. And the whole idea, in my case I focused on, Oh God. Forgive me for getting all biomedical buzzy. No,

Speaker: let’s get in.

Speaker 2: Alrighty then, uh, Pediatric Oncology for Genome Research.

Yeah. A marvelous and wonderful thing that boils down to how do we take the data we have, crunch it better, faster, and truer, and be able [00:05:00] to help children, if not outright avoid having cancer, make sure that we’re better prepared to give them the right treatment as it begins to emerge. So to me, that’s a beautiful balance of technology and relevance to society’s greater good.

And I’m going to sound terribly naive when I say this, Luke, but I deeply believe that technology must serve society’s greater good or it’s missing its purpose. So neuromorphic chips, quantum technology, it’s there, okay? But it’s all about what is the application we need, right? So when you talk about AI and hybrid AI, what you’re really talking about is reaching into a toolkit.

And getting the right wrench, the right pliers, and the right hammer for the job. Right. Right? Right. So that’s really, really what, we want to talk about. The wonderful thing about most engineers I know, and, is that, yes, necessity is the mother of invention. But I also like to say that if necessity is the mother of invention, then desperation is a wonderful midwife.

Because you’re getting that baby out now. I love [00:06:00] it. So, so the application loop of quantum technologies, neuromorphic chips, and more, is happening. And not just in the lab, okay? But it’s got to do with what, what are we solving? What are we, what are we trying to do to better the world? Right, right.

That’s what drives what combinations of technologies we bring.

Speaker: Yeah, that’s great. That’s really awesome insight. Well, you’re very kind to me. And I think, you know, we talked about patents, right? And I think there’s over 40 patent submissions this year. Yeah, we have 40

Speaker 2: patent submissions this year. Where we invented the world’s first Hive transformer.

Which, which, if, if you will indulge me for just a quick second. No, please go dig, dig in on that.

Speaker: That’s what we want people to know about, right? Oh, certainly.

Speaker 2: Well, thank you. So a hive transformer, let’s talk about bees. What do bees do? They make little bees, and they make honey. Where do they do it? In a hive.

How do they do it? They use honeycombs, these beautiful geometric, symmetrical, mathematical manifestations of a greater mind. Mm hmm. Okay? And so, they make these things, and they put in their nectar. So how do they know when the [00:07:00] nectar becomes honey? Well, that's through a hive mind index. So the whole point of the Hive Transformer, while it's still a transformer model, is that it's about pre-curation of data and planned and purposeful storage of that data.

So rather than simply saying, hey man, I'm going to be everything to everyone, and this is no hit on the beautiful work done in large language models, I'm a huge fan of large language models, right? The problem with that is ultimately, at scale, if you're trying to be everything to everyone, you're not much to anyone.

Right. So what we found with the Hive Transformer is that we are able to, through NLP, collect a list from a person talking to a chatbot, and sub-second, under, uh, ridiculous numbers, I'd have to look it up and tell you, but the benchmarks are all there, sub-second response, we're able to capture, build, and have immediately ready for reference these, these honeycomb transformers that form into a hive.

So this whole idea of the hive is marvelous. Because if you say to [00:08:00] yourself, what good is that? I mean, well, it's honey. But if we step back a moment and say, why can't we collect honeycombs into hives? Why can't we aggregate individual actions into skills? And if we can collect skills, can't we aggregate them as talents?
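A loose way to picture that rollup, honeycombs collected into hives, and actions aggregated into skills and then talents, is a pair of toy dictionary builders. This is only a guess at the shape of the idea for illustration, not the patented Hive Transformer design; all the names and sample data are invented.

```python
# Hedged, toy illustration of honeycombs -> hive and actions -> skills -> talents.
# Structures and names here are hypothetical, not the actual patented design.

def build_hive(honeycombs):
    """Aggregate individual 'honeycombs' (curated fact cells) into a hive
    keyed by topic, so retrieval becomes a lookup rather than a broad search."""
    hive = {}
    for topic, fact in honeycombs:
        hive.setdefault(topic, []).append(fact)
    return hive

def aggregate(actions_to_skill, skills_to_talent):
    """Actions roll up into skills, and skills roll up into talents."""
    return {talent: [a for s in skills for a in actions_to_skill[s]]
            for talent, skills in skills_to_talent.items()}

hive = build_hive([
    ("sr675", "check fan logs"),
    ("sr675", "verify firmware level"),
    ("network", "ping gateway"),
])
talents = aggregate(
    {"grip": ["close fingers"], "lift": ["raise arm"]},
    {"pick_up_tool": ["grip", "lift"]},
)
print(hive["sr675"])            # every curated fact for that topic, instantly
print(talents["pick_up_tool"])  # a talent expanded into its underlying actions
```

The design point the sketch tries to capture is "did we put it where we can find it": retrieval is a keyed lookup into pre-curated, purposefully stored cells rather than a search over everything.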

Okay, okay. And if we can have talents, how best do we describe different personas? Okay. Perhaps, uh, the user, the human, needs talents for diagnosing their SR, uh, 675 class, version 2. In other words, what's wrong with my computer? Right, right. Well, we understand from the communication, the keywords are triggered, we go to classification, we pull up the relevant hives, and now we're into it.

Okay. And now, instead of the chatbot saying things like, I can't answer that, or, let me tell you once again something you already know about the weather, it's: Oh, so we're talking about this specific model of the Lenovo SR [00:09:00] 675. You need it configured and you need it diagnosed for the following

Speaker: logs. I’ve got you.

Awesome. No, this is great. Cause I think there’s a lot of focus around this. And I know from what we’re doing too, where, you know. You said it really well around the large language models. Things start to, the whole web starts to look the same. Everything starts to feel the same. If all you got is a

Speaker 2: hammer, the whole world’s a nail.

And that never

Speaker: works with panes

Speaker 2: of glass, brother.

Speaker: Exactly. And we're, we're looking at ways to kind of like augment that with real time data or even local models and things like that. And this sounds like a really interesting, uh...

Speaker 2: Well, the Hive Transformer allows us to scale from way down on the watch, all the way up to any kind of supercomputer you'd like to create. It was built to be adaptive and collective. So if we just watch nature, it gives us the answer to so many things.

Speaker: Is this where you're, are you guys looking at this mostly from kind of this consumer electronics angle at first, or is it, is it bigger scale? Like give us, give us a sense of, maybe there's something we could cover.

Speaker 2: Well, I hate, I hate to sound this way, but I am a bit of a mad scientist. I love it. I'm not really a mad scientist. The most I ever get is slightly [00:10:00] annoyed. I'm sorry, I'm amusing myself unduly. This is great. So, so I was looking at this and I was unhappy with the benchmark speeds with certain large language models.

And people would then invest in all this time, and everybody would roll their eyes and say, It’s time to train it again, let’s see what we get this time. And I’m like, no, we’re going about this wrong. We’re all about what can we get. We should be about, did we put it where we can find it? So I found myself thinking of all things, of the Amish, who used to have in their barns, labeled bins and drawers.

So they knew exactly where a screwdriver was, in what barn, in what bin, right, at what point. So I thought, well, gee whiz. I was looking out at some bees in flowers and I'm going, oh, no, it couldn't be this easy. And so I went after it. And then as I, as I built the mechanism, I said, well, what are the applications, Jeffrey? It's not, it should never be tech for tech's sake. We can take this all the way down to the phone. We can take this all the way wherever we want in the cloud. Wow. We even have a [00:11:00] project that I had codenamed Hecate, which really had to do with humanoid robots being able to exchange information. Remember, we just talked about skills and talents.

Yeah. So, you have two humanoid robots. You have humanoid robot A, learning how to pick up a screwdriver. You have humanoid robot B, learning how to carefully swing a hammer. Now, these are close but disparate skills. So, if we use something as simple as Bluetooth, and the two humanoid robots pass within sufficient distance, they do a diff on hives.

And they acquire each other's hives. Ah. So now we've just cross-trained humanoid robots. So my, my work, it's a shout out to Drew, and Dave, and Ash, and Dinesh, and Arun, and Deepak, and everybody out there. Just wonderful. And they said, oh, but this is just like Iron Man, please can we call it codename Jarvis? Okay. And they squealed. So it's become codename Jarvis. Awesome. And it works. The technology works because, again, it's [00:12:00] like, think of Legos. If you don't like hives, think of Legos. You snap them together, they become what you make.
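The cross-training exchange Jeff describes, a diff on hives followed by each robot acquiring what the other has, can be sketched as a set difference and merge over each robot's skill hive. Again, this is a hypothetical illustration of the idea, not the codename Jarvis implementation; the proximity trigger and Bluetooth transport are omitted.

```python
# Toy sketch of the cross-training idea: two robots diff their hives
# and acquire each other's skills. Hypothetical names; not codename Jarvis.

def diff_hives(mine, theirs):
    """Return the skills the other robot has that we lack."""
    return {skill: theirs[skill] for skill in theirs.keys() - mine.keys()}

def exchange(robot_a, robot_b):
    """Simulate the in-proximity swap: each side merges in the other's diff."""
    a_gains = diff_hives(robot_a, robot_b)
    b_gains = diff_hives(robot_b, robot_a)
    robot_a.update(a_gains)
    robot_b.update(b_gains)

# Each hive maps a skill to the actions that compose it.
robot_a = {"pick_up_screwdriver": ["grip", "lift"]}
robot_b = {"swing_hammer": ["grip", "swing"]}

exchange(robot_a, robot_b)   # they "pass within sufficient distance"
print(sorted(robot_a))       # both robots now hold both skills
print(sorted(robot_b))
```

Diffing first, rather than copying whole hives, means only the missing cells travel over the link, which is the part that makes an opportunistic, short-range exchange plausible.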

Speaker: Right. No, I mean, and there's so much, like, so many people want to kind of dig in on this. I know Brave too. We are too. I mean, you've got people with different devices, and our whole thing is around kind of like, you know, privacy and making sure that we're not, uh, exposing users' data out to folks that they don't know. You know, it's all about, like, kind of, you know, having that kind of ownership over it.

Speaker 2: Well, responsibility. Yes, yes. And whether you're playing in AI, or you're just doing regular technology that involves more humans, the ethical questions remain pervasive, but similar. Right, right. With an AI system, my whole approach to it has been proof of result and evidence of compliance. Mm hmm. If your system, nifty as it may be, can't provide that, we need to fix that.

Speaker: Yeah,

Speaker 2: And that’s the truth of it because you want to be responsible, you want to do the right things the right ways, you want the technology to support where you want society to go. Mm hmm.

Speaker: Now, I can't think of a better segue. I know you mentioned before we, we, uh, we started here, there, there's some [00:13:00] Smart City, uh, stuff with Lenovo. Yeah, Smart City Barcelona. Can you, can you dig into a little bit about what that's all about?

Speaker 2: Yeah, yeah, yeah, I'd love to. My pleasure. So, about five days before, NVIDIA was going to announce that they were going to take the NIM AI blueprint and take it out. And so NIMs, let's talk about NIMs.

Yeah, yeah, no, no, please unpack it. They're a wonderful technology. NVIDIA Inference Microservices. This is what you get when you let engineers name things. Okay? What it really is, Luke, is a set of containers that hold specially trained language models, APIs, templates for usage, and more, so that it's self-contained and pre-tuned.

The advantage to the customer is that rather than have to go out and say, okay, let me start over here with this, and now let me see if I can't knock it down to that, lose maybe two to three months trying to make it work, it's prebuilt. It's purpose-built. So you can take that. So I was in the room when the term got coined down in San Jose, by accident.

Okay. I think I lost my way, and I was on my way to the men's room and I just ended up in a conference room, and the product manager, who's brilliant, shout out to that [00:14:00] fellow, wonderful guy, said, we're going to, we're going to do blueprints. And I'm saying, you have geometrically very complicated things, because now you're going to have NIMs that plug in like LEGOs to other NIMs to solve more complex problems.

Why don’t you just call them recipes? Well, we’ve already called them a blueprint, so what they really are are recipes that fit a specific use case. So if you want a chatbot that’s also capable of x, y, z, such as in this case, uh, Metropolitan, NIM usage, which is a thing, then you would take and you’d have a blueprint for that.

So what we did down in Barcelona, and every time I call it Barcelona I get corrected by the good people in Spain because you pronounce it like that because the king did and I’m like, you got it, I’m cool. And what we did down there is they had said to me, can you invent a couple things for us? Nine days.

Doc, whenever they call me Doc, I know I’m in trouble. They said, Doc, can you invent a couple things for us before we go into Barcelona? And I said, when’s that? Nine days. And I said, I [00:15:00] pulled out the Star Trek response, I said, Dammit, Jim, I’m a doctor, not a vending machine. And they laughed, and they said, we understand.

And I kind of took it as a challenge. And so I went without sleep for a few days, and we ended up with, codename VINA, Visual Insight Network for AI, and, that mention I think I might have done a moment ago, EdgeGuard, which I was told to call edge guarding, and so, yes, Lenovo, I'm calling it edge guarding.

Which really has to do with adaptation. Yeah. So with VINA, it's about quality of service, and using predictive caching to identify different images. Okay. So that we can move relevant images at high quality across the network, as opposed to just a gush and stream of images and, okay, wait a minute, let me try and gather it afterwards. So it's using AI to refine AI. Okay. And EdgeGuard has to do with a situation when we have a hybrid network that's connected to the cloud, but also connected to the traffic lights, also connected to the stop signs, also connected to the kiosk. And all of a sudden we lose connectivity to the cloud.

Well, usually [00:16:00] that means game over for a little bit till we reboot. But because of EdgeGuard, it adapts. And it has localized history. And so now it forms a new network using the most recent history that's stored locally. And it does its best until it comes back online. So, during simulation, we saw an acceptable loss in processing speed.

But nothing that turned into, all the traffic lights on West 23rd are off. Right. So this is another example of hybrid AI, where we balance symbolic logic with machine learning to do more than just count beans, but to infer, and to take those rules and be able to take action on those rules.
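That "symbolic logic balanced with machine learning, acting on the rules" combination can be illustrated with a toy traffic-light controller. Everything below, the thresholds, function names, and rules, is invented for illustration; it is only a sketch of the pattern, not anything deployed in Barcelona.

```python
# Hedged sketch of hybrid AI as described: a learned estimate feeds explicit
# symbolic rules, and the system acts with no human in the loop. Illustrative only.

def estimate_congestion(vehicle_counts):
    """Stand-in for the machine-learning half: trend the recent counts."""
    return sum(vehicle_counts) / len(vehicle_counts)

def decide_phase(congestion, pedestrian_waiting):
    """Symbolic half: explicit rules act on the inferred value."""
    if pedestrian_waiting:
        return "walk"                  # the safety rule always wins
    return "extend_green" if congestion > 20 else "normal_cycle"

def control_step(vehicle_counts, pedestrian_waiting):
    # Agentic in the minimal sense used in the transcript: the action is
    # taken directly, without direct human decision making.
    return decide_phase(estimate_congestion(vehicle_counts), pedestrian_waiting)

print(control_step([30, 28, 25], pedestrian_waiting=False))  # heavy traffic
print(control_step([5, 4, 6], pedestrian_waiting=True))      # pedestrian wins
```

The split matters for accountability: the learned part only estimates, while the part that actually acts is a small set of inspectable rules, which is one way to get the "proof of result and evidence of compliance" Jeff asks of AI systems.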

And that popular term today is agentic. And all agentic is, Luke, is any system that takes action without direct human decision making. And we've had that a lot of years.

Speaker: Yeah, yeah, yeah. Okay?

Speaker 2: So it is agentic. So when you combine different methodologies of artificial intelligence, and again, I have [00:17:00] such a problem with that term.

I'm so sorry. Because there's no little man in a box. There's no sentience. Right, right, right. It is a simulation, a simulacrum, an attempt to imitate that which we're familiar with. And, and at some point I'll bore you to tears about my ideas about robopsychology. No, no, I love it. I'm working on my third PhD, this one in clinical psychology.

Because robopsychology, Luke, isn't about the singularity's come and things are sentient. It's about how do people emotionally engage and cope with assistants that aren't alive and are only imitating people.

Speaker 2: So for me, robopsychology is understanding the human psyche in relation to our projected simulations of intelligence.

Speaker: Interesting. No, I... you gave a stoplight example. Are there any others that kind of stand out, like in the city context? Like, I think it's super interesting for sure.

Speaker 2: So we talk about digital twinning, and we talk about that as a means of planning the city. So we use digital twins to plan the city.

We use responsive technologies to manage the operations of the city. So what we really have is the full range of our own minds, allowing us to [00:18:00] create tools that help us go, where do I want to go, versus where are we? And literally, where do I want to go? Because you have your traffic lights and your traffic intersections.

So what would be lovely to adapt further is an improvement such that you don't just have the Garmin effect. No hit on Garmin, I probably shouldn't mention a brand name. But rather than just following Google Maps and following all these things, you do more than store routes. You're able to take your own data on how you walk or how you ride or how you travel in an autonomous car, and then use that inferentially to determine the best paths and the best times for you.

So it really can come together in a beautiful way to make the human experience very fulfilling. This sounds very useful for people.

Speaker: Don’t you imagine that?

Speaker 2: I know! I’ll never do that again, Luke. I’m sorry. You’re right. I was wrong. No, no, no, no. I love it. Oh, I’m just teasing. I love it. I’m glad you enjoy it.

I enjoy your time, too.

Speaker: Let me just, like, just for folks [00:19:00] that are listening, right, like, a lot of this stuff I know that you all are developing at Lenovo. Like, how open is this for other folks to start adopting? Are these tools that people can use, like this, this hive, uh, uh, the Hive Transformer? Is it something proprietary to Lenovo, or are you guys planning on deploying it? What's the nature of the patent?

Speaker 2: Right, right, right. We retain the rights and royalties for it for a period of X years from the United States government. So the idea is that we will serve our customers meaningfully. We give them unique advantage as a result of these technologies, specifically the Hive Transformer and another element called the Semantic Cascade, that takes RAG, Retrieval Augmented Generation, and takes us further into what I call CARGO.

And so when someone asks me, CARGO? And I go, beep beep, cargo, beep beep. And then after they hit me, we then talk about the fact that CARGO really stands for Context Aligned Retrieval, Generation and Orchestration. So we have that agentic element possible now, on the fly. So all of our customers will receive [00:20:00] the right combination of these inventions relevant to their use case.
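As a speculative sketch of what context-aligned retrieval and orchestration could look like in miniature: the toy below routes a query to a "hive" chosen from the caller's context before retrieving, instead of searching everything loosely related. The hive contents, the routing rule, and the orchestrate function are all invented for illustration; the transcript does not describe CARGO's internals.

```python
# Speculative toy of the CARGO idea (Context Aligned Retrieval, Generation
# and Orchestration) as a step past plain RAG. Not the actual Lenovo design.

HIVES = {
    "sr675": {"hardware": "check fan logs", "firmware": "verify firmware level"},
    "weather": {"today": "it is sunny"},
}

def orchestrate(query, context):
    """Route the query to a hive based on the caller's context, then
    retrieve only the context-aligned entry."""
    hive = HIVES.get(context.get("product", "weather"), {})
    for topic, answer in hive.items():
        if topic in query:
            # A real system would hand this to a generator; the toy
            # just tags the retrieved cell with its hive.
            return f"[{context.get('product', 'weather')}] {answer}"
    return "no aligned answer found"

print(orchestrate("diagnose a hardware fault", {"product": "sr675"}))
print(orchestrate("what about today", {}))
```

The contrast with plain RAG in this sketch is that context narrows the search space before retrieval happens, which is one plausible reading of "context aligned" and of the sub-second lookups described earlier.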

We're also working on, uh, bringing them fully to production as products that stand alone in and of themselves as a prescribable suite of tools. Oh, cool. Right. Yeah. And so as that goes on, you know, uh, what is it? No knock on Nonaka, huge. The same people who brought us the Agile method, mm-hmm, were Japanese linguists who made the, the point that ideas...

So, the whole argument to patenting isn’t so much to retain control as it is to educate the public. So, really, all anyone has to do to understand it is to read the patent or reach out to us at Lenovo, and I’m always happy to spend a little time with people to help them understand what we’ve created.

Speaker: No, that’s awesome.

That’s awesome. Yeah,

Speaker 2: I learned a lot of stuff that way myself.

Speaker: I mean, it’s out there. I don’t know it all. I don’t even know half of it. There’s so much out there too, right? Oh my god, there’s so much beautiful stuff out there. It’s paralyzing sometimes. It is. so I think, [00:21:00] like, what other new capabilities do you foresee coming from the integration of advanced hardware and software?

Speaker 2: Well, I think that, I think that what we can do is trust our imagination and sense of right and wrong and let this take us forward. This is why I always stress that, technology must serve the greater good to fulfill its purpose. So rather than opportunistic technologies, I would rather see we commit ourselves to enabling technologies.

There's a wonderful startup coming out of NYU that intends to help the blind better navigate. And so we started to talk about using their actual sticks and building in sensors, so that we're capturing actual data live, but also trending that back to a device, could be their own phone, again, because the Hive Transformer technology is able to go so tiny. Little units of honeycombs.

Yeah. Yeah. Yeah, it's all about how many honeycombs do you need? How many hives do you need? Right, right, right. So it's a beautiful vision they have, and their intention is to make life better for the handicapped. So [00:22:00] we invent tomorrow to solve the problems of today.

Speaker: Yeah.

Speaker 2: Yeah, we don't, we don't just turn it up to 12 because there's a 12 on the dial, right? Does that help?

Speaker: No, it does help. No, it does help. And that's why I want people to hear these things, right? Because so much of, a lot of the general public is scared around these things of AI, and like, oh, my job's gone, or Terminators or whatever, you know.

Speaker 2: That whole robopsychology element that I want to bring to light.

Yes, I feel that as we bring in new technologies, the most important thing we can do is remember the importance of the human element. Yeah, a great old Renaissance thinker said it: it's essential to be human. Mm-hmm. So no matter how we may prize knowledge and the ability to make knowledge look like magic, it's all about the people.

Mm-hmm . If we’re not helping the people, if we’re not relevant to the people, then we, we’ve missed the whole point.

Speaker: Let's touch on why AI is better with Lenovo and NVIDIA.

Speaker 2: Of course, of course, thank you. And that's a [00:23:00] very kind softball of a question. But in fact it's actually true, because I'm absolutely no salesman. I'm just a dumb engineer. And today, here's the reason why. NVIDIA technology, specifically the wonderful work they've done with GPUs, and other components and software all around that.

The NIMs, the NVIDIA Inference Microservices, all of this beautiful ecosystem of software, in combination with Lenovo servers and technologies, which, I hate to use the term best of class, really are best of class. We have water-cooled capabilities on smaller servers. It's kind of freaky it's so good, alright?

But then you add in our 40-plus patents, and our breaking in and setting, I think, a whole new direction in terms of innovation around AI, making it more human, making it more functionally agentic. I think that what happens is, for the customers, what's good for the customers is that they end up getting capabilities they couldn't have otherwise.

Speaker: Awesome.

Speaker 2: That's really, I think, why Lenovo and NVIDIA are best.

Speaker: I think it makes sense. [00:24:00] And, uh, you know, stepping back a little more. I mean, like, you're an advocate for technology serving the greater good. We talked about some of the...

Speaker 2: Oh, I'm, I'm, I'm rabidly passionate about it.

Speaker: Yeah. I mean, like, how do you see next gen AI enabling advancements in social justice?

Speaker 2: That's a beautiful question. I think the whole thing is around, how do we enable the greater good? Yeah. Because what we imagine is what we build, what we feed is what we find, right? That old story about, you know, do you feed the bear or do you feed the maiden, right? So if we feed for the greater good, we'll create for the greater good.

It's how we, how we direct our mind. And sometimes it's super unintentional and we get some amazing stuff. Yeah. You know, Star Trek communicators to mobile phones. Yeah. So today we're talking about ideas like the Hive Transformer. We're talking about scaling down and scaling up as we need. And it's not so much a question of can we, but where do we want to go with it?

So, when you talk about serving the greater good, enabling social need, you have to allow people to have their authentic voice. You have to [00:25:00] allow people to have the capacity to offer their two cents. Their story. What are they going to contribute to tomorrow? And I think if everybody were to come to embrace that thinking, they'd realize that everyone has something of value to add.

And I think that the way that happens is, God bless open source, right? The von Hippel mindset of innovation says, the community needs a barn. I don't care if it's making money, we're building a barn. And you get a barn. And then you find out later that it makes money. And in contrast, you have the Chesbrough, the Chesbrough mindset of open innovation, which basically says, very positively, it's Kickstarter.

If people want it, they’ll pay for it. And if people pay for it, we ought to go do it, because people want it. So in one way, it realistically says the authentic need can be expressed in a lot of ways, but usually through what people will pay for, patronage. So I think if we shift our models, especially at the corporate level, more towards listening to the authentic voice of the customer [00:26:00] and doing just enough research and development into the market to produce what’s needed, then we’ll be relevant.

And when we're relevant, the greater Gestalt of people means we create good things for people.

Speaker: Good things from it. Oh, that's a wonderful thing to talk about.

Speaker 2: Oh my god, uh, that's a wonderful opening, and, uh, I think I just want people to believe in their own ability to imagine and build the future in a positive way with others. That it isn't so much this idea of closely held knowledge as a weapon. I think in the early 21st to middle 21st century, it won't so much be, uh, ciencia es potencia, it'll be ciencia es comercium. Knowledge is power becomes knowledge is trade, and it builds upon itself. The beauty of open source is, we all make money as we can [00:27:00] off the models. And if we, we keep true to that thinking,

right, then we’ll have that. Now, patents tend to be very much a matter of holding ownership.

But they’re also good because they bring a methodical and structured way of sharing and showing how to do things.

So if we can kind of balance the two approaches, we’ll have a structured and methodical way of openly sharing information. And there’s Nonaka and Takeuchi’s upward spiral going all the way up, right?

Because if you have structure, and you share, and you listen to one another, then you create beautiful things.

Speaker: Oh, that's beautiful, man. Really appreciate it. Finally, where can people follow your work, or if they want to reach out and say hello?

Speaker 2: Oh my god, uh, so, so I am on social media from time to time. You can find me on LinkedIn. Lenovo puts my blogs out from time to time, so if you go to Lenovo and look up Dr. Jeff, you'll find me, and I'll be happy to [00:28:00] help and talk to anybody.

Speaker: Excellent. Well, thank you so much, Jeff. This has been really great. Really appreciate you coming on. Love to have you on again too sometime.

Speaker 2: I'd love to do that, Luke. I really enjoyed speaking with you.

Speaker: Excellent, man. Thank you very much.

Speaker 2: A genuine pleasure. Have a great one.

Speaker: Alrighty.

Luke: Thanks for listening to the Brave Technologist podcast.

To never miss an episode, make sure you hit follow in your podcast app. If you haven't already made the switch to the Brave browser, you can download it for free today at brave.com, and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • AI’s role in building smarter cities through Lenovo’s collaborations with NVIDIA and other partners.
  • How AI security is evolving with EdgeGuard and other cutting-edge protections.
  • The role of hybrid AI in combining machine learning and symbolic logic for real-world applications.
  • Corporate responsibility in AI development and the balance between open-source innovation and commercialization.
  • Why diverse perspectives are essential in shaping AI that benefits everyone.

Guest List

The amazing cast and crew:

  • Dr. Jeff Esposito - Engineering Lead at Lenovo R&D

    Dr. Jeff Esposito has over 40 patent submissions, with a long background in research and development at Dell, Microsoft, and Lenovo. He lectures on advanced technological development at various US government research labs, and believes that technology is at its best when serving the greater good and social justice.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.