
Episode 54

LIVE from AI Summit: The Impact of Strategic Fine Tuning on Achieving Peak Performance

Senthil Padmanabhan, VP of Platform and Infrastructure at eBay, discusses how eBay’s use of fine-tuning is improving its operational workflows. He explains why fine-tuning existing models can often be more cost-effective than training from scratch, and how methods like low-rank adaptation (LoRA) are pivotal in improving AI performance.

Transcript

[00:00:00]

[00:00:08]

[00:00:41] So welcome to The Brave Technologist. How are you doing today?

[00:00:47] Vocaster Two USB-1: I’m doing great. Thanks for having me, Luke, and Brave here. It’s a delight to be here.

[00:00:50] Vocaster Two USB: Awesome. Been looking forward to this one. So we’re here at the AI Summit in New York today, and I understand you’re one of the speakers at the event. Do you want to share with the audience a little bit about what you’re covering?

[00:00:59] Vocaster Two USB-1: [00:01:00] Yes, indeed. So my talk was yesterday, and it was about fine-tuning. It was a panel, and more importantly, it was about how strategic fine-tuning can help you achieve peak performance in the models you’re working with. We think a lot about AI models and all the advanced techniques we need to apply, but simple, more efficient fine-tuning can get you a lot of what you need.

[00:01:20] And that’s what our talk was about.

[00:01:21] Vocaster Two USB: So in the context of eBay, how is fine-tuning the models enhancing your consumer-facing applications?

[00:01:28] Vocaster Two USB-1: So, I run platform and infra at eBay. And since I run the platform, we are also responsible for developer productivity, or general employee productivity, across the company. So I focus more on the operational workflows than the customer-facing workflows. So let me talk

[00:01:41] Vocaster Two USB: Yeah, yeah. Let’s

[00:01:42] Vocaster Two USB-1: Let’s dig into that.

[00:01:42] Yeah. So, the biggest challenge that you face with any of the closed or open models, right, closed meaning proprietary models, is that you hit the upper limit on productivity soon. I mean, there’s a ceiling there, because there’s only so much they can do without the context of your inner workings, [00:02:00] right?

[00:02:00] I mean, that’s always the case. And tools like Cursor or GitHub Copilot and all those things do a fantastic job in the editor, in the IDE, but it’s all in the context of what files you have open and what’s available to it. But there is a ton more information in large technology organizations.

[00:02:15] So let me call out one thing where fine-tuning really helped us. See, large technology organizations, really any software organizations, have to do software upkeep, right? They have to keep upgrading their software or they get left behind. I mean, I’m talking about upgrades like moving to the latest version of Java, Node, Python.

[00:02:33] If you don’t do them, tech debt keeps accumulating. And a lot of times folks don’t realize that tech debt is the thing that’s holding you back from moving fast, right? People don’t even talk about it, but not doing these upgrades, leaving your software stack behind, is exactly that. And there’s also another notion that, oh, if you fix tech debt now, the returns come only in the future.

[00:02:54] That’s not the case anymore. You fix tech debt, you see immediate returns. And so this happened in eBay this year. [00:03:00] We wanted to upgrade all our application software from one version of JDK to another version. It was JDK 8 to JDK 17.

[00:03:07] And we did it this year, across our whole application fleet, close to 2,000 apps.

[00:03:11]

[00:03:12] Vocaster Two USB-1: And this is where fine-tuning helped. The first two months we did a manual effort, and then we took those samples and fine-tuned on top of GPT-4, and that made the whole process 25 percent faster.

[00:03:23] Yeah, so that was huge, right? I mean, you just compare the number of days it took in the first two months with the days after the fine-tuning, where it helped in the code generation part of what eBay was doing with this migration, and it became 25 percent faster.

[00:03:35] So it took fewer days. And we saw a similar effect when we did the Spring Boot 2.7 to 3 upgrade; that was about 30 percent faster.
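
For readers who want to try something similar, here is a minimal sketch of that pattern: collect (before, after) pairs from the manual migration work and submit them as a fine-tuning job. The sample pair, file name, and model identifier below are illustrative assumptions, not eBay’s actual setup.

```python
import json

from openai import OpenAI  # pip install openai

# Hypothetical pairs of (JDK 8 source, manually migrated JDK 17 source)
# collected during the first two months of manual work.
SAMPLES = [
    {
        "before": "Optional<String> name = Optional.ofNullable(user.getName());\nif (name.isPresent()) { greet(name.get()); }",
        "after": "Optional.ofNullable(user.getName()).ifPresent(this::greet);",
    },
    # ... more curated examples ...
]

def build_training_file(path: str = "jdk17_migration.jsonl") -> str:
    """Write chat-format fine-tuning examples to a JSONL file."""
    with open(path, "w") as f:
        for s in SAMPLES:
            record = {
                "messages": [
                    {"role": "system",
                     "content": "You migrate Java code from JDK 8 to JDK 17 idioms."},
                    {"role": "user", "content": s["before"]},
                    {"role": "assistant", "content": s["after"]},
                ]
            }
            f.write(json.dumps(record) + "\n")
    return path

def launch_fine_tune(training_path: str) -> str:
    """Upload the dataset and start a fine-tuning job via the OpenAI API."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    uploaded = client.files.create(file=open(training_path, "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=uploaded.id,
        model="gpt-4o-mini-2024-07-18",  # example of a fine-tunable model id
    )
    return job.id

if __name__ == "__main__":
    print(launch_fine_tune(build_training_file()))
```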

[00:03:43] So these are all good things: you take some good samples of manually done work, fine-tune on them, and then you get very good results. I’ll call out another thing.

[00:03:52] In big companies, again, there are a lot of custom DSLs. I mean, DSLs are domain-specific languages, right? They’re internal systems that help you [00:04:00] author some capabilities. For example, in eBay we have a DSL for a risk system. So it’s rules that you write: if a user tries to sign in n number of times and they fail, what should you do about it?

[00:04:11] Should you throw a captcha? Should you block them? Things like that. We have developers or engineers or even product owners author these in a simple declarative language, like a YAML file, which puts those rules into place. You don’t have to program; it’s very declarative. You just say, “if this, then that,” something like that. And those rules get complicated pretty soon, because you can put in a lot of conditions, it takes a lot of time to author them, and you can make errors too.

[00:04:33] So we did this fine-tuning on top of Code Llama 2, and that made the whole process 50 percent faster. People can now author those rules very quickly. I mean, it doesn’t have a company-wide impact, because it’s just risk rules done by one domain, but it’s still a small effort with outsized returns.
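
As an illustration of the kind of (description, rule) training pair involved, here is a hedged sketch. The YAML schema is invented for this example, since eBay’s risk DSL is internal; only the general shape of the task is the point.

```python
# A hypothetical (prompt, completion) pair for fine-tuning a code model
# (e.g. Code Llama) to author declarative risk rules. The YAML layout is
# invented for illustration; eBay's real DSL is not public.
PROMPT = (
    "Write a risk rule: if a user fails sign-in 5 times within 10 minutes, "
    "show a captcha; after 10 failures, block the account for an hour."
)

COMPLETION = """\
rule: failed-signin-throttle
when:
  event: signin_failed
conditions:
  - count: {window: 10m, at_least: 5}
    then: {action: show_captcha}
  - count: {window: 10m, at_least: 10}
    then: {action: block_account, duration: 1h}
"""

# One instruction-style training record, stored as one JSON object per line.
record = {"prompt": PROMPT, "completion": COMPLETION}
```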

[00:04:51] That’s a good thing. In fact, for next year, we already want to create OpenAI... not OpenAI, OpenAPI specs, sorry.

[00:04:58] Vocaster Two USB: It’s all good.

[00:04:58] Vocaster Two USB-1: It’s all about OpenAI these days.

[00:04:59] [00:05:00] OpenAPI, OpenAPI specs. Because we want to create an API catalog so that people can discover APIs and do contract-based development.

[00:05:09] Because without this API discovery, there’s a lot of duplication within the company, or a lot of coordination overhead. So we are trying to generate these OpenAPI specs, and again, fine-tuning is helping us, because some engineers did that kind of thing in the past and it’s working out here.
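
For context, this is roughly the shape of the artifact being generated. The minimal OpenAPI 3.0 document below is a hand-written illustration, not one of eBay’s real APIs.

```python
# A tiny, hand-written OpenAPI 3.0 document of the kind a fine-tuned model
# could be asked to generate from a service's handler signatures.
# The service name, path, and schema are illustrative only.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "listing-service", "version": "1.0.0"},
    "paths": {
        "/listings/{id}": {
            "get": {
                "summary": "Fetch a listing by id",
                "parameters": [{
                    "name": "id", "in": "path", "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "The listing"}},
            }
        }
    },
}
```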

[00:05:25] So, just kind of a high-level question too. I mean, think about eBay, right? It’s huge, it’s a staple of the web, and it’s been around for a very long time. Was there a mandate within the company to integrate AI or adopt it more heavily, or is it something that you’ve had to champion and convince people of? I’m just kind of curious. No, I know, I think it’s actually both ways, right? In fact, it started when Codex came into play, when it was not even widely known, back before 2022. It was August of 2021, I guess, when Codex came into play, when GitHub announced it. I mean, OpenAI announced it, and then GitHub was the one trying to use it.[00:06:00]

[00:06:00] Even at that time, with the Copilot version using Codex, we became fascinated by it. So the push happened naturally, and after November 2022, everybody in the world was behind it, right? So, I mean, both top-down and bottom-up, it was inevitable. We had to do it.

[00:06:14] Vocaster Two USB: Excellent. Excellent. Yeah. And how has the reception been at the company when they see some of these results? I mean, like, these are big results, right? Like, are they looking at applying it to more areas?

[00:06:23] Vocaster Two USB-1: so I think,

[00:06:24] I mean, when we started, the first use case we tried was Copilot itself, right? There was initial skepticism, to be honest with you. And especially the more senior, very strong super coders, they initially felt a little bit of a lag in productivity, because they felt, oh, the suggestions don’t make sense.

[00:06:41] Because they are the advanced programmers, right? The more senior folks, the handful of folks in the company who code a lot. For them it felt like a bit of a lag, but later the chat feature is what enabled them to be faster, rather than the code-complete feature.

[00:06:59] In fact, that’s why [00:07:00] we did A/B testing in eBay. I authored a blog about it too. So when we launched Copilot, we took 300 developers within our company. 150 were put in a test bucket, where Copilot was enabled, and 150 in a control bucket; that’s our control group. We ran the test for six weeks, and what we saw at the end of the A/B testing is that people in the test group, people who had Copilot enabled, saw a 17 percent reduction in code merge times, PR merge times, and a 12 percent reduction in lead time.

[00:07:28] Lead time is the time between when you submit a PR and when it goes to production, right?

[00:07:33] So that was very good.
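
To make the comparison concrete, here is a small sketch of how a test/control split like the one described can be summarized. The merge-time numbers are synthetic placeholders, since the underlying eBay data is not public.

```python
import numpy as np
from scipy import stats  # pip install scipy

# Placeholder per-PR merge times in hours; real data would come from the
# source-control system for the 150-developer test and control groups.
rng = np.random.default_rng(0)
control = rng.lognormal(mean=3.0, sigma=0.6, size=150)   # Copilot disabled
test = rng.lognormal(mean=2.81, sigma=0.6, size=150)     # Copilot enabled

# Relative reduction in mean merge time, plus a Welch's t-test as a rough
# significance check on the difference between the two groups.
reduction = 1 - test.mean() / control.mean()
t_stat, p_value = stats.ttest_ind(test, control, equal_var=False)

print(f"Relative reduction in mean merge time: {reduction:.1%}")
print(f"Welch t-test p-value: {p_value:.3f}")
```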

[00:07:34] Vocaster Two USB: fantastic.

[00:07:35] Vocaster Two USB-1: Maybe it made them use smaller PRs.

[00:07:36] Maybe that’s the thing. I think there’s something there, right? I mean, of course, if you expand it to the whole population of developers, the impact may be reduced a little bit, I guess.

[00:07:46] But at least in the A/B testing, we saw a positive return. And that motivated us to forget the noise and expand it. Now everybody in the company has access to all these tools.

[00:07:54] Vocaster Two USB: That’s awesome. That’s fantastic. Yeah. I mean, I’m kind of curious too, because a lot of our listeners might be thinking about [00:08:00] ways to adopt this, right? With there being so many pre-trained models available, what key factors should organizations prioritize when they’re looking into fine-tuning?

[00:08:09] Vocaster Two USB-1: Yes. I think the first consideration that an organization should have is whether you’re going to go with an open or a closed model, right? If you’re going with a proprietary model, then you don’t have to worry about any of the infra. I mean, you have to worry about the bill you pay to them.

[00:08:21] But you don’t have to host anything. I think that’s the first thing: are you going to go with a proprietary model, where the infra comes to you and you pay an OpEx cost, or take on the CapEx cost of hosting an open-source model?

[00:08:34] Having said that, there are some things that are common, right? For example, you should have a dataset creation pipeline. A lot of times folks don’t realize that in the whole ML workflow, the most time-consuming part is the dataset creation

[00:08:46] part, because you need to get the data from various sources and clean it up. That takes a lot of time, and it’s not very attractive work. It’s not

[00:08:54] technically appetizing work either, right?

[00:08:56] But it’s important work that everybody has to do. And that’s [00:09:00] one thing that’s common irrespective of which model you choose, because the dataset is coming from you. The second, or the last part, is the evaluation, right? You also need to have an evaluation system in place.

[00:09:09] A system that continuously evaluates how your models are performing. That is independent of whether you choose an open or closed model. So if any organization is thinking about this, think about these two steps and have them in place. And the middle step is, basically, if it’s an open-source model, then you have to worry about GPUs, right? You have to build a GPU cluster. And then, fine-tuning is much, much cheaper, I would say, than post-training, for example, where you need bigger GPU clusters and more time allocated for those clusters. So I think those would be the key considerations.
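
A minimal sketch of that evaluation step, with `query_model` standing in for whichever hosted or vendor endpoint you use; the metric and record format are assumptions for illustration.

```python
import json

def query_model(prompt: str) -> str:
    """Stand-in for your inference call (hosted open model or a vendor API)."""
    raise NotImplementedError

def evaluate(eval_path: str) -> float:
    """Run a held-out eval set of {"prompt": ..., "expected": ...} JSONL records
    and report exact-match accuracy. Swap in whatever metric fits the task:
    unit-test pass rate for migrated code, schema validation for DSL output, etc."""
    total, correct = 0, 0
    with open(eval_path) as f:
        for line in f:
            case = json.loads(line)
            output = query_model(case["prompt"])
            correct += int(output.strip() == case["expected"].strip())
            total += 1
    return correct / max(total, 1)

# Re-run this on a schedule, or on every new model or checkpoint, so
# regressions are caught continuously, as described above.
```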

[00:09:41] And I’d also say one thing, right? Some techniques can make your life much better. Like LoRA. It’s a very common technique that I would encourage everybody to use. It’s low-rank adaptation, a technique that makes the whole fine-tuning much more efficient. What it does is, instead of updating the parameters of the base [00:10:00] foundational model, it focuses on adapting low-rank matrices that handle each of the tasks.

[00:10:05] So it freezes the parameters of the pre-trained model and learns these additional lightweight matrices, which can encode domain-specific information. That’s a technique that’s very common in the industry, and we also leverage it a lot within eBay.
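
For illustration, a minimal LoRA setup using the Hugging Face `peft` library; the base model name and hyperparameters are examples, not necessarily what eBay uses.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model  # pip install peft

base = "codellama/CodeLlama-7b-hf"  # example base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Freeze the pretrained weights and learn small low-rank adapter matrices
# on the attention projections instead of updating all parameters.
config = LoraConfig(
    r=16,                 # rank of the adapter matrices
    lora_alpha=32,        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

Because only the adapter weights are trained, the fine-tune fits on far smaller GPU budgets than full post-training, which is the efficiency point made above.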

[00:10:18] Vocaster Two USB: Awesome. No, that’s fantastic. Yeah, that kind of brings me to optimizing costs, you know. I mean, any pointers on how businesses can

[00:10:25] optimize cost? It’s such a huge topic, right? Like, you

[00:10:28] know, everybody wants to be doing this, but then everybody sees the bill. And, you know, obviously, like any pointers for folks on how they can optimize cost for their business with these types of things.

[00:10:36] Vocaster Two USB-1: Sure. See, first of all, we are talking about fine-tuning, right? Fine-tuning is a cheaper alternative than training a model from scratch or post-training. In fact, we learned this at eBay, surprisingly. We initially thought, for the code migration example I gave earlier, that we’d probably have to post-train a model with millions of lines of code.

[00:10:55] And we did that. We took some of our top repositories, which [00:11:00] had good code bases, took around 75 million lines of code, and post-trained an open-source model. But unfortunately, while it helped us in other scenarios, it did not help us achieve the code migration that we thought we would achieve, which fine-tuning was able to achieve with just a handful of samples we gave it. That was a very surprising takeaway for us, because we thought, oh man, it’s trained on 75 million lines of code.

[00:11:27] And that’s going to be a big deal. And that did not happen. So that’s why I think you should think about those things. Sometimes fine-tuning is the cheaper alternative. Maybe sometimes it may not work, I do get it, so try the cheaper alternative first and then go with the more expensive option. Because we were giving six A100s to this post-training process, and it was consuming days to train on these millions of lines of code, which could have been saved. But I mean, it gave us some good results. I talked about LoRA just now, right?

[00:11:52] But there are also others; the space is evolving a lot. In fact, my team is looking into a new technique called sketchy moment [00:12:00] matching. It’s a technique introduced by NYU. And what does it do? That technique looks into data pruning; it incorporates data pruning into the fine-tuning process.

[00:12:11] So it selects the most important data points and makes the whole fine-tuning process more efficient and less memory-intensive. There’s another one that recently launched called DAFT, D-A-F-T, Domain-Aware Fine-Tuning, which again enhances the model’s adaptability to new domains

[00:12:27] by incorporating some domain-specific features while training. It’s again a relatively new thing. Even I’m not deeply familiar with it yet; my team has started looking into it. But I think it’s worth exploring these spaces, and generally thinking about fine-tuning as the cheaper alternative rather than post-training, because a lot of people jump straight into post-training.

[00:12:43] Start with fine-tuning, and when fine-tuning really works, then look into these techniques.
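
To illustrate the general idea of data pruning, and not the sketchy-moment-matching or DAFT algorithms themselves, here is a generic sketch that selects a small, diverse subset of candidate training examples by greedy coverage of an embedding space.

```python
import numpy as np

def kcenter_greedy(embeddings: np.ndarray, budget: int) -> list[int]:
    """Generic k-center greedy selection: pick `budget` points that cover the
    embedding space well. This illustrates the *idea* of data pruning only;
    it is not the sketchy-moment-matching or DAFT method mentioned above."""
    n = embeddings.shape[0]
    selected = [0]
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(selected) < min(budget, n):
        idx = int(dists.argmax())          # farthest point from the current set
        selected.append(idx)
        new_d = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)   # distance to the nearest selected point
    return selected

# embeddings = embed(all_candidate_examples)  # e.g. with a sentence encoder
# keep = kcenter_greedy(embeddings, budget=1_000)
```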

[00:12:48] Vocaster Two USB: That’s fantastic. No, this is super helpful. I know people are going to dig this because, you know, We get these questions from people all the time. People are always just like super curious right now and it’s such a hot thing. And yeah, like especially people that have been in places for a while where [00:13:00] they want to be integrating these things, but they just, you know, don’t know these techniques.

[00:13:04] Right? Yeah. So what are some common pitfalls or limitations that organizations should be aware of when they’re fine-tuning?

[00:13:11] Vocaster Two USB-1: The most common one, I would say, is that quality matters more than quantity. And it’s really true in this case, right? If you are able to get really high-quality, curated data, that can do wonders for you. I would at any time prefer a thousand high-quality rows of data over ten thousand average rows.

[00:13:34] Right? I mean, this gets lost in some conversations. People always go with, okay, let me get a lot of data, right? I don’t think that’s the right approach. That’s one of the things people should be worrying about. Even if you have to spend more time and end up with fewer rows, if they’re high-quality rows, do that.

[00:13:49] That was my biggest learning, and all of the techniques I highlighted before follow this route. I think it applies to every technique.
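
A minimal sketch of what “quality over quantity” can look like in practice before a fine-tune run; the thresholds are arbitrary examples, and real pipelines add task-specific checks.

```python
def keep_high_quality(records: list[dict]) -> list[dict]:
    """Keep only curated-looking rows: deduplicate and drop trivially short or
    overlong targets. Thresholds are arbitrary examples; a real pipeline adds
    task-specific checks (does the code compile? does the YAML parse?)."""
    seen, kept = set(), []
    for r in records:
        prompt = r.get("prompt", "").strip()
        target = r.get("completion", "").strip()
        key = (prompt, target)
        if not prompt or not target or key in seen:
            continue                      # empty or duplicate row
        if not (20 <= len(target) <= 4000):
            continue                      # suspiciously short or long target
        seen.add(key)
        kept.append(r)
    return kept
```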

[00:13:57] Vocaster Two USB: That’s awesome. That’s super helpful. Yeah. How has AI [00:14:00] transformed eBay’s platform in recent years? I know we’re talking infra. Are there any other areas too?

[00:14:04] Vocaster Two USB-1: Yeah.

[00:14:05] I mean, when I think about this AI transformation, I think about other transformations that have happened, right? Like the mobile transformation that

[00:14:12] happened in the 2008-2009 timeframe, right? It changed how companies work, including my own company. Slowly, users started coming to the mobile platform to shop on e-commerce.

[00:14:25] And that changed the whole landscape. Once traffic, once business, moved toward that platform, all companies updated their tech stacks, updated their ways of working, updated how they design things. I think the same thing will happen in the AI world too. It’s going through the typical S-curve growth.

[00:14:41] There’s a lot of hype when something launches, as usual, and then the hype wears off after some time, and real use cases start emerging. That’s the S-curve: it declines a little bit, and now it’s the time when the real use cases are emerging, and it’s going to have a period of rapid growth.

[00:14:56] And then again, it saturates after a period of time. That’s happening within eBay [00:15:00] too, which I’m seeing right now. Because for anything we do in eBay, we ask this question: hey, can GenAI do the task better, or can it be incorporated into the workflow? In fact, on my side of the operational world, SRE is another area where we are looking at where LLMs can do a brilliant job.

[00:15:16] Because you get all these logs into the system, and finding anomalies or finding a root cause (RCA) and things like that now becomes super easy with it, which was never the case earlier. Right? And then there are the typical things that you hear from every other company, like customer support, and other areas like Slack support

[00:15:31] channels these days, right? They really do a good job. A human doesn’t have to come and answer the same question that has already been answered in the past. That information discovery is a great thing. Even in product building, another area I will call out is this: this whole thing with GenAI, and all the tooling and new capabilities that all these companies have given us, has allowed us, and allowed me as a leader, to explore the opportunity space much more before I make a decision, especially one-way-door decisions, [00:16:00] these really important decisions that you have to think through before making, because they are one-way doors, and once you make them, they have an impact for multiple years.

[00:16:08] Right? There is this idea in information theory that when uncertainty increases, the value of deferring work also increases, so they suggest deferring the decision. But there’s a cost to deferring work too: opportunity cost. So now my thing is that I can make a more informed decision. Let’s say we have to make a call on whether to go with framework A or B; those are big decisions, right?

[00:16:30] We are not going to just switch between these frameworks casually, or re-architect something, or upgrade to something lightly. Now we can quickly go and test things out with all the new canvas tools, or these new artifacts, all these tools that have become available to us, even if we don’t know that domain yet, just to get a sense of how the environment will be. And it helps me make a more informed decision.

[00:16:49] Yeah,
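
As a rough illustration of the SRE use Senthil mentions in this turn (log triage and root-cause suggestions), here is a hedged sketch. The model name and prompt are assumptions, and a production setup would add retrieval of runbooks, redaction, and human review.

```python
from openai import OpenAI  # pip install openai

def suggest_root_cause(log_lines: list[str]) -> str:
    """Ask a chat model for a candidate root-cause summary of anomalous logs.
    Illustrative only; nothing here should be acted on without human review."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    excerpt = "\n".join(log_lines[-200:])  # keep the prompt small
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": "You are an SRE assistant. Identify likely root causes "
                        "and suggest next diagnostic steps. Be concise."},
            {"role": "user", "content": f"Recent error logs:\n{excerpt}"},
        ],
    )
    return response.choices[0].message.content
```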

[00:16:49] Vocaster Two USB: Yeah, that makes sense. It makes a lot of sense. And I know we touched on this a little bit earlier with a specific use case, but are there ways that eBay is leveraging AI to help [00:17:00] around the areas of fraud or risk? Do you want to go any deeper on any of that? I mean, it’s such a big area, right?

[00:17:07] Vocaster Two USB-1: That’s true, but I would say this: look, domains like identity, risk, and fraud have been doing ML-based anomaly detection for ages, even before the whole GenAI era started, right? That’s a whole research area in the ML world itself. They have been orthogonal to this whole GenAI thing, and they have been making tremendous progress.

[00:17:25] Having said that, of course, GenAI has rapidly accelerated many things, right? Whenever text is involved, GenAI comes into play. And a lot of the research and advancements happening in the GenAI world also complement this area of research. So they both go hand in hand; this area is growing as quickly as possible, and GenAI is just accelerating it even more.

[00:17:48] I would differentiate them in terms of their capabilities, because each has independently been making a lot of progress in the last couple of years.

[00:17:56] Vocaster Two USB: Well, we’ve been talking a lot about operations, right? But [00:18:00] in the area of e-commerce, do you see some ways that eBay can use AI to revolutionize the e-commerce experience for customers, but also for small businesses and merchants? There’s a big economy within eBay of small businesses, right?

[00:18:12] That have been using the platform

[00:18:13] Vocaster Two USB-1: Yeah.

[00:18:14] Vocaster Two USB: Anything you want to cover on that?

[00:18:16] Yeah, I mean, it goes back to my previous point on the mobile revolution, right? See, when people found out that they have a powerful computer with them always, with a powerful camera, it changed the whole landscape of how e-commerce is done.

[00:18:28] I think the same thing is going to happen with the AI world too, right? You’re going to have super-tailored experiences for our customers. We are going to have agents that do a lot of heavy lifting for them. All that is coming, right? But today I can highlight our listing flow in eBay.

[00:18:43] I mean, eBay being a two-sided marketplace has its own uniqueness. That’s where eBay comes into play; I think that’s the uniqueness eBay can bring, right? It’s a pure two-sided marketplace. Today, eBay helps a lot of our C2C sellers, which are small businesses or everyday sellers who don’t have access to the [00:19:00] advanced teams our business sellers have, sellers who have resources and can hire professionals to take pictures of their items, with a whole pipeline of workflow that they create.

[00:19:10] Small sellers or regular sellers, customers do not have access

[00:19:13] to that. Right? B2C sellers are different than C2C

[00:19:16] sellers. That’s where the tools that we are providing now for the listing flow just democratize this whole world, because you don’t have to have fancy access to a photographer or a camera.

[00:19:26] You just take a picture with your phone, and it makes your photo look very professional. It generates the whole description from your photo. It really seems magical. You should try it out.

[00:19:35] Vocaster Two USB: Yeah, I mean, this is one of the things I was really excited to have you on for, because if you look at the genesis of the Internet and how it’s progressed over the years, eBay was really revolutionary when it first hit the market. And even now you guys are still finding ways to break it down to that peer-to-peer sales experience, and ways that you can tune that as well.

[00:19:54] In

[00:19:54] addition to like all the other things that businesses can do is super interesting. I love it.

[00:19:58] I love seeing that you guys are getting your hands on it.

[00:19:59] Vocaster Two USB-1: [00:20:00] I think that’s an area that eBay is really focusing on. We launched it and we are iterating on it rapidly, but adoption from our sellers, especially the small sellers, the C2C sellers, has been fascinating, with rapid growth. Then another area where eBay truly distinguishes itself is authentication guarantee.

[00:20:16] So eBay does this program we internally call AG, authentication guarantee, which is to make sure that... I mean, a lot of times people buy products on eBay that are pre-loved, or things like the sneaker that you don’t get anywhere else, the trading card, these expensive handbags, or these Rolex watches, right?

[00:20:31] And we need to make sure they are truly what they say they are. That’s why eBay has these warehouses across the country where these products first get shipped, and authenticators validate them, whether it’s a real Rolex, or a real handbag of the brand it claims to be, or a real trading card or sneaker.

[00:20:49] Right. Yeah.

[00:20:49] Vocaster Two USB: Yeah.

[00:20:50] Vocaster Two USB-1: And all those things, and only when they put the seal on does it get shipped to the buyer.

[00:20:55] Interesting.

[00:20:55] Right. So this is, again, eBay’s stronghold. Yeah. [00:21:00] And you can imagine how a multimodal large language model, or a GenAI model, can come and play a role here,

[00:21:06] right? Yeah,

[00:21:07] Yeah,

[00:21:07] I give it a picture and a video, along with some description,

[00:21:11] Vocaster Two USB: Yeah.

[00:21:11] Vocaster Two USB-1: it can clearly do the authentication job for you, and that will be a thing where eBay is going to contribute to the e-commerce world.

[00:21:16] Vocaster Two USB: Huge.

[00:21:16] I mean, I’ve used eBay to get old golf clubs, like from different sets, you know, that are harder to find. I’m kind of weird like that, and I want to find them. But yeah, there’s always this question of authenticity: is it the actual thing, or is this some knockoff or whatever?

[00:21:28] No, it’s fascinating. So when you think about fine-tuning methodologies, and being on the forefront of engineering and AI in this area, where do you see us going in the next three to five years?

[00:21:37] Vocaster Two USB-1: See, it’s very difficult to predict anything

[00:21:39] Vocaster Two USB: I can imagine. Let’s give it one to two years then maybe like where where you thinking like when you look on the horizon like where, cause I mean obviously you’ve seen you’ve started to see some payoff with what you’re doing internally like, but now where’s your head going with it?

[00:21:54] Vocaster Two USB-1: Yeah, see, the thing with the AI space is it evolves so much, right? You go for a week on vacation and come back, and you’re [00:22:00] completely outdated. You feel like you’re new to the industry. But I’ll say this: I think fine-tuning will slowly evolve into few-shot prompting.

[00:22:10] I think it’s just some handful of really good prompts

[00:22:14] will change a general model into an expert model.
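
To illustrate the few-shot idea: a handful of worked examples carried in the prompt itself, rather than baked into the weights. The Java rewrites below are hypothetical examples of this pattern.

```python
# Few-shot prompting: steer a general-purpose model toward expert behavior
# with a few high-quality in-context examples instead of fine-tuning.
FEW_SHOT_PROMPT = """\
Rewrite the Java snippet using JDK 17 idioms.

Input:
    List<String> names = users.stream().map(User::getName).collect(Collectors.toList());
Output:
    List<String> names = users.stream().map(User::getName).toList();

Input:
    if (obj instanceof User) { User u = (User) obj; greet(u); }
Output:
    if (obj instanceof User u) { greet(u); }
"""

def build_prompt(code_to_migrate: str) -> str:
    """Append the new snippet after the worked examples."""
    return FEW_SHOT_PROMPT + "\nInput:\n    " + code_to_migrate + "\nOutput:\n"
```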

[00:22:16] I think that’s one thing that I can predict. In fact, the announcement that OpenAI made last week on reinforcement fine-tuning, I don’t know whether you saw it,

[00:22:24] See, that’s what I told you, right? The field

[00:22:26] evolves.

[00:22:27] Vocaster Two USB: so fast. Yeah. Mm

[00:22:28] Vocaster Two USB-1: Reinforcement fine-tuning. It was announced during the 12 days of shipping, one of the announcements that came out, and it really intrigued me, because what they do there is they take just a very small subset of unlabeled records as a dataset, right?

[00:22:44] I mean, in the past it was supervised fine-tuning, where you had to give them labeled records. In this case, it’s unlabeled records, even just a thousand records. You give it to the system, and then you give a separate validation set, which is independent of what you uploaded. There’s no overlap [00:23:00] between the two.

[00:23:00] And then they have built-in graders inside the system, which grade the responses. They run the system, they check whether the predictions hold up against the validation set, and then the grader sort of drives the reinforcement loop inside the system, and it makes the system much, much smarter. So I think they are opening it up for people to experiment with.
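
To illustrate the grader idea conceptually, here is a hedged sketch of the shape of that loop. This is not OpenAI’s actual reinforcement fine-tuning API; the model methods are hypothetical placeholders.

```python
def grade(output: str, reference: str) -> float:
    """Toy grader: full credit only for matching the reference answer.
    Real graders are task-specific (rubric scoring, unit tests, exact match)."""
    return float(output.strip() == reference.strip())

def reinforcement_tuning_round(model, train_prompts, validation):
    """Conceptual loop only, not a real vendor API: sample outputs, score them
    with the grader, and feed the scores back as the reward signal."""
    rewards = []
    for prompt, reference in train_prompts:
        output = model.generate(prompt)          # hypothetical interface
        rewards.append((prompt, output, grade(output, reference)))
    model.update_from_rewards(rewards)           # hypothetical interface
    # Score on a held-out validation set the model never trains on,
    # as described above.
    return sum(grade(model.generate(p), r) for p, r in validation) / len(validation)
```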

[00:23:19] I think that’s the direction the world is heading in: prompt engineering is sufficient. And I think reinforcement fine-tuning is heading in the direction where they are saying you can easily build these experts with a really high-quality, small subset of data, and that it’s low cost and happens quickly.

[00:23:35] Vocaster Two USB: That’s awesome, man. We covered a lot today. I mean, is there anything we didn’t cover that you want our audience to know about?

[00:23:41] Vocaster Two USB-1: No, I know. I think what I would say is this: this space is evolving very rapidly, and it’s very easy to get lost in the everyday news that comes in.

[00:23:49] So my suggestion, which has always been the case within my company and within my team, is: don’t get carried away by the new fancy thing that you see every day, right?

[00:23:56] Have some very specific and [00:24:00] measurable goals on what you’re going to accomplish for that year, for that quarter, and try to go and get them. I think that’s the thing. Don’t get carried away by a lot of these things; be clear about what you want to accomplish and go after it.

[00:24:11] and you don’t need all the things that you see

[00:24:14] Vocaster Two USB: All the bells and whistles, right?

[00:24:15] Yeah. Yeah. No, I think that’s a great pointer. Finally, where can folks follow you online, follow your work, or say hello?

[00:24:22] Vocaster Two USB-1: So I think I’m on Twitter

[00:24:24] actually, on X,

[00:24:25] Vocaster Two USB: Yeah.

[00:24:25] Yeah.

[00:24:25] I know.

[00:24:25] Vocaster Two USB-1: central underscore high,

[00:24:27] active on LinkedIn too.

[00:24:28] I think you can find me.

[00:24:30] Vocaster Two USB: We’ll include both in the

[00:24:31] show notes. Yeah. Yeah.

[00:24:32] Yeah.

[00:24:33] Well, thank you so much for coming in.

[00:24:34] I think our audience learned a lot about what you guys are doing at eBay, and about fine-tuning in general. I really appreciate it, man.

[00:24:39] Vocaster Two USB-1: Thank you. Thank you for having me again. Super delighted to be here.

[00:24:41] Vocaster Two USB: Excellent. Excellent. Thanks.

[00:24:43] Luke: Thanks for listening to the Brave Technologist podcast.

[00:24:47] To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com, and start using Brave Search, which enables you to search the web privately. Brave [00:25:00] also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • The impact of strategic fine tuning on achieving peak performance
  • AI’s broader impact on eBay’s infrastructure, fraud detection, and customer-facing experiences
  • The challenges of tech debt and software upgrades
  • Case studies from eBay that illustrate productivity gains via fine-tuning
  • Considerations for choosing between open and proprietary AI models
  • The importance of data quality over quantity in fine-tuning processes
  • Predictions for the future of AI development and fine-tuning techniques

Guest List

The amazing cast and crew:

  • Senthil Padmanabhan - VP of Platform and Infrastructure

    Senthil Padmanabhan is a Vice President and Technical Fellow at eBay, where he runs Platform and Infra (private cloud) Engineering for the global eCommerce marketplace, providing infrastructure for 4000+ engineers to deliver products at scale. Before this role, he was in the Product engineering space, building consumer-facing applications at eBay and Yahoo.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.