
Episode 55

The Role of Analytics in Shaping the Future of MLOps

Sophia Rowland, Senior Product Manager at SAS, discusses her journey from data science to product management, focusing on the integration of AI and analytics. She explains the concepts of ModelOps and MLOps, the challenges organizations face in operationalizing machine learning models, and the critical role of analytics in this process.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:29] You’re listening to a new episode of The Brave Technologist, and this one features Sophia Rowland, who is a senior product manager focusing on model ops and ML ops at SAS. In her previous role as a data scientist, Sophia worked with dozens of organizations to solve a variety of problems using analytics.

[00:00:44] As an active speaker and writer, Sophia has spoken at events like the AI Summit, All Things Open, SAS Explore, and SAS Innovate, and has written dozens of blogs and articles. She holds a bachelor’s degree in computer science and psychology and a master of science in quantitative management. [00:01:00] In this episode, we discussed dependency management errors that occur when IT and data science teams work in silos, the role of analytics in shaping the future of MLOps, score code generation and scoring functions, the connection between algorithms and psychology, using data and software to tap into motivation, how to discern hype from meaningful advancements in emerging technologies, and more.

[00:01:21] And now for this week’s episode of The Brave Technologist. Sophia, welcome to The Brave Technologist. How are you doing today?

[00:01:31] Sophia: I’m well. Thanks for having me, Luke.

[00:01:33] Luke: Yeah. Yeah. I’m looking forward to this one. How has your role at SAS kind of enabled you to bridge the gap between data science and product management in AI and analytics space?

[00:01:42] Sophia: Interesting question. So when I started at SAS, I actually started as a data scientist, specifically focusing on working alongside our financial services clients in areas like advanced analytics, optimization, and text analytics, and ModelOps was actually a big [00:02:00] area. And as I was working alongside these various organizations, something that popped up

[00:02:06] over and over again were problems around how they take the models that the data scientists built and how they get them in a place and form where they can be used to actually make decisions at an organization. They have a lot of models, but what next? It was a large gap that various teams faced, and it kept repeating over and over.

[00:02:26] And it kind of became my passion to dive into this area, to problem solve, to figure out how we help these organizations get from having models to using those models. And ultimately what happened was the previous product manager for our ModelOps solution at SAS had left, and I dove into that role. I jumped in.

[00:02:46] I was really excited to help out because I was bringing in this experience as a data scientist, having worked alongside these teams. Wondering, well, what do we do with our models next? How do we get them in a place and form where they [00:03:00] can be used for decisions? And so for me, as a product manager, I really use these data science skill sets to really think like a user, to think like how our organizations are going to be using this product so that we can build software that best fits their needs, that helps them with their underlying pain points and limitations and gaps that’s easy to use so that we can help them get from point A to point B.

[00:03:25] Luke: Oh, that’s great. A lot of our listeners are kind of getting a bit more education on the AI space and might know about models and things like that. Maybe for those unfamiliar, can you explain a little bit more around what model ops are and ML ops are? Because I know you mentioned it a couple of times and it’s pretty interesting stuff.

[00:03:43] And then why those things, I mean, are key to success in AI projects.

[00:03:49] Sophia: Yeah. So how you define ModelOps and MLOps, it’ll vary depending on who you ask and when you ask. The definitions of each have evolved so much over time and [00:04:00] even across analyst organizations, they’ve been defining it differently.

[00:04:03] The way I think about it is, machine learning operations, or MLOps, really arose from the needs of IT and engineering to create a repeatable and standardized process for how models move from data scientists to their production environment. So it has historically been more of an engineering-heavy term.

[00:04:24] ModelOps, or model operations, has been more of a business-focused term. It’s really more focused on how you use analytics as a business to make better decisions, to improve outcomes. A ModelOps process, in my view, encompasses MLOps. The capabilities that are typical of an MLOps process are still necessary with ModelOps, but I think it’s not just the engineering skill sets that are involved.

[00:04:49] I think you also need to think about the other users in the process: your end users, your business users, even the managers and the executives who are making the business [00:05:00] decisions. That is what makes a successful process. And I think we’re really starting to see this greater emphasis on ModelOps and MLOps now because we are getting a lot more successes with analytics.

[00:05:11] Organizations are getting to a point where they are building models successfully, and then again, they ask the question: now, what do I do with this model? I put in all this time investing in getting my data clean and hiring data scientists to build these models, and I have these models. But what next? I mean, ultimately, when it comes to defining the terms, I try not

[00:05:33] to get too bogged down in the differences. When I introduce myself, I just say both, because they are so overlapping and similar, and definitions change based on who you ask. I think ultimately I focus more so on how we’re using analytics to improve outcomes as an organization and kind of leave it at that, because if we get bogged down in the ops, I mean, you’ve got AIOps, LLMOps, FMOps, everything is going to be an [00:06:00] ops.

[00:06:01] Luke: Right, right. And I mean, I imagine, too, that like, I mean, well, AI has been around for a while, like, and especially given, you know, your experience with, like, product management, like, it still seems like there’s a lot of discovery around product market fit that companies are trying to figure out, too. So I would imagine it’s a bit of a moving target.

[00:06:19] In that area as well, like, are you starting to see kind of more market fit happening with, with these AI models in the business cases that are happening? Cause I know if you, if you listen to kind of, you know, the noise that’s happening around, it’s like, everybody’s talking about AI everywhere and it’s like a ubiquitous thing, right.

[00:06:34] But like realistically, like ROI is going to have to be a forcing function on, we got to start making money with this stuff. Right. Like, and so, you know, how, how much are you seeing kind of market fit happen from your point of view and how is that influencing the model ops work that you’re doing?

[00:06:49] Sophia: I think it’s been a very interesting space in the field, because we are moving past experimentation to organizations actually getting things into production, but the open question [00:07:00]

[00:07:01] is: is what I built valuable? Am I actually doing something important for the organization? Is my model performing effectively? And I think that’s been a gap at a lot of organizations: actually monitoring the effectiveness of their model, the performance of their model, whether their model is up and running successfully and as expected, and being able to find those issues and address them quickly.

[00:07:25] So we’re getting to a point where the organizations I’ve seen are now starting to understand: I built this model in production, but I can’t calculate the ROI. I can’t calculate whether or not I’m actually generating business value because of it. And, as a product manager, that starts to get back to how we build out our product.

[00:07:45] Folks are much more interested in the return on investment, in the monitoring pieces, to really understand the effectiveness of their analytics process. So when we’re looking at the market overall, just because of the advancements in AI and where [00:08:00] people are, we are starting to see a lot more demand for calculating return on investment, calculating value, and understanding the effectiveness of our systems overall.
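The monitoring gap Sophia describes can be made concrete with a small sketch: compare a batch of predictions against observed outcomes, then translate the result into a rough net-value number. This is a generic illustration, not anything from SAS’s product, and the per-decision benefit and cost figures are invented for the example.

```python
# Minimal sketch of batch-level model monitoring tied to business value.
# value_per_hit / cost_per_miss are illustrative assumptions, not real numbers.

def monitor_batch(predictions, outcomes, value_per_hit=100.0, cost_per_miss=40.0):
    """Return accuracy and a naive net-value estimate for one scoring batch."""
    if len(predictions) != len(outcomes):
        raise ValueError("predictions and outcomes must align")
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    misses = len(predictions) - hits
    return {
        "accuracy": hits / len(predictions),
        "net_value": hits * value_per_hit - misses * cost_per_miss,
    }

# One hypothetical batch: four decisions, three of them correct.
result = monitor_batch([1, 0, 1, 1], [1, 0, 0, 1])
```

A real pipeline would run this per scoring window and alert when accuracy or net value drifts below an agreed threshold, which is exactly the "is my model still effective?" question raised above.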

[00:08:09] Luke: Awesome. No, that’s super interesting. I mean, I think, you know, especially kind of at this phase that we’re at now, where you’re starting to see things showing up, you know, in products, like you’re saying. What are some of those, organizational challenges that you’re, you’re facing when you’re operationalizing machine learning models, are there any pointers that you have, like for other people that might be kind of facing them of, of like, or, or any kind of, tips or, you know, just any interesting story you might have around, around those challenges and, and finding solutions for them.

[00:08:39] Sophia: Sure. So I’ve talked to dozens of organizations in this space, and there are multiple problems. This is not just one problem, unfortunately, but one of the things that I’ve seen folks have issues with is dependency management. So ultimately, you’ll have a data scientist. They’ll have their computer, their laptop, they have a Jupyter [00:09:00] Notebook, and they’re downloading the versions of Python and packages that they want.

[00:09:04] And then somewhere else over here in the organization, they have their validation and their production environments where that model ultimately has to live to be incorporated into their business processes. And there’s a mismatch. What the data scientist has downloaded and is using is not what is in production.

[00:09:22] And ultimately, if you have an organization where IT and data science are siloed, you’ll have the data scientists passing over their model and their assets to IT, and IT trying to figure out: why isn’t the model running in production? Because these dependency management errors are not always easy to debug or understand, and as more changes come to Python packages.

[00:09:48] It just becomes more prevalent. And so when I’m looking at some of these organizations, you know, some of it is a governance aspect. We have to have some kind of control over the environments that our data scientists [00:10:00] are using to develop models. But it’s also: how do we get our production and our validation environments to somewhat match what the data scientists want to use?

[00:10:09] And an emerging technology that’s been very useful here has been the use of containers for model deployment. So can we take the dependencies that the model needs and put them in a lightweight container? We don’t necessarily have to have everything the data scientist is using in their environment, just what the model needs to run and execute.

[00:10:27] And then that way we have a lightweight version of the model that can execute in more common ecosystems like Kubernetes, like Docker, just to make it a little bit easier for IT. Of course, there are still organizational issues. Other things I’ve seen in organizations are just poor communication.

[00:10:46] There are so many silos between data scientists, between IT, between risk teams. So if you’re in financial services, you sometimes will have teams that are just looking at modeling risk. Of course, you have business users, and just how do [00:11:00] you get all of these folks working together towards a common goal? Also, looking at things like a centralized area for modeling resources.

[00:11:08] Again, sometimes organizations simply don’t know where their models are. They don’t know what’s in production. They don’t know where it is. And so just having a way to pull all this information together into one seamless location that folks can interact with in a self-service manner has been very helpful for some of these disparate models all over the organization.

[00:11:30] Or at least it’s a step in the right direction.
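The container idea a few paragraphs up, shipping only what the model needs rather than the data scientist’s whole development environment, can be sketched in a few lines. This is a toy illustration; the package names and versions are made up, and a real setup would feed the result into a pinned requirements file or a container image build.

```python
# Toy sketch: trim a data scientist's full dev environment down to the
# minimal pinned dependencies a scoring container actually needs.
# All package names and versions here are invented for the example.

dev_environment = {
    "numpy": "1.26.4",
    "pandas": "2.2.1",
    "scikit-learn": "1.4.2",
    "jupyterlab": "4.1.5",   # notebook tooling: not needed at scoring time
    "matplotlib": "3.8.4",   # plotting: not needed at scoring time
}

model_runtime_deps = ["numpy", "scikit-learn"]  # what the model needs to execute

def container_requirements(env, needed):
    """Pin only the packages the container needs, keeping the dev versions."""
    missing = [pkg for pkg in needed if pkg not in env]
    if missing:
        raise KeyError(f"not installed in the dev environment: {missing}")
    return [f"{pkg}=={env[pkg]}" for pkg in sorted(needed)]

reqs = container_requirements(dev_environment, model_runtime_deps)
```

Pinning versions from the environment the model was actually built in is what closes the "works on my laptop, fails in production" mismatch described above.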

[00:11:33] Luke: It makes sense. Like a lot of kind of those practical things where, you know, businesses are businesses, right? Getting everybody to be on the same page has got to be an interesting challenge too. And I’d imagine making sure the right feedback loops are in place is probably pretty critical for this type of stuff, I would imagine.

[00:11:49] SAS is known for, like, a strong focus on analytics. How are analytics shaping the future of MLOps and what role do you see analytics having in the broader landscape? [00:12:00] It’s kind of related to what we were talking about, but, but let’s specifically kind of drill down into the analytics side.

[00:12:04] Sophia: Yeah. I think what’s unique about SAS is that we’ve been in the statistical software space for so long. The earliest versions of SAS software came out about 50 years ago, which is, I think, absolutely amazing

[00:12:18] when you think about how long software typically has been around. And so SAS has been really focusing on building statistical models, later analytical models and AI, for decades. And I think SAS had already come up against the “well, now I have a model, what do I do with it?”

[00:12:37] question years ago. The product that I work on specifically, the first iteration of it for model management, deployment, and testing, came out 15 years ago.

[00:12:49] Sophia: I don’t want to tell people where I was 15 years ago, but it’s absolutely amazing that the product was there. We’d encountered these [00:13:00] issues before, and we’d already started thinking about them.

[00:13:02] I don’t work on the same version of the product. It has evolved quite a bit over the last 15 years, but it’s very interesting, because it gives us a lot of time to think about the process, to think about the collaboration, to think about the automation. What can we do to make this easier for folks?

[00:13:21] What can we do so it doesn’t involve advanced skill sets, so it doesn’t involve a ton of time? And what can we do to take the industry knowledge, the knowledge that we’ve gotten from working with so many businesses, and infuse it into a product? So I think because we encountered this problem so early, we’ve really gotten the ability to think through how to make this process easier and accessible to a variety of users, even as analytics continues to grow and expand.

[00:13:53] Luke: Kind of related or unavoidable, right? And some of these, these contexts, but how much is these kind of [00:14:00] compliance issues and things like that? I know, obviously, like, privacy is 1 area that’s recently, you know, with GDPR and everything like that come to the forefront, but now you’re starting to see more and more of a focus on.

[00:14:10] focus around, you know, regulatory compliance or just even getting, you know, policymakers a basic understanding around how AI works and how these use cases can be, how much of that side of the equation impacting your work so far?

[00:14:26] Sophia: I think quite a bit. We actually here at SAS have a whole data ethics practice.

[00:14:32] So it’s a group of individuals who are really diving into this work, and even in some cases influencing this work. So our vice president of the data ethics practice, Reggie Townsend, he sits on several AI advisory councils to help really shape the understanding of, well, how should AI be used in a trustworthy and responsible way?

[00:14:54] And I’ve been working personally with several folks from our data ethics practice on the [00:15:00] features we put into the product. So last year we took a lot of the recommendations from the NIST AI Risk Management Framework and started mapping them to tasks within the analytics lifecycle, to things that specific users could do to create, you know, an attestation or documentation that they are acting in a responsible manner.

[00:15:22] So we are observing it and working with it, and I’m very glad to have a data ethics practice to really help translate some of this legalese into things that I can actually build into a product.

[00:15:36] Luke: Yeah, that’s really interesting. Maybe we can go a little bit deeper on this one because like, when we’re thinking about things like ethics, right?

[00:15:41] Are there specific types of, like, things you’re looking for, like, with, around ethics that, or, or situations to try to avoid, I guess, in, in practical use for these things? So I think that’s one of the areas that’s, like, not really well understood by people that are just kind of getting into this space.

[00:15:57] But, maybe you can, like, help,fill us in a little [00:16:00] bit on, like, certain things that you’re looking for on the ethics side or, or, or proactive decisions, right? Like, or just some examples. It doesn’t have to be anything where you’re giving anything specific away, but anything that might be helpful.

[00:16:10] Sophia: Sure. So I start by thinking about the AI system as a whole and understanding how we want it to fit within our tech stack, within our business processes. And just understanding, you know, first: what are we trying to do with it? What is the value? Then thinking through: what are the potential limitations?

[00:16:30] Because we’re not going to build a perfect AI system, but are there things that we might need to be aware of so that we can mitigate them? You know, data privacy is, of course, a big risk. So we need to make sure that we are looking at how we are handling the data, how we are granting only the least amount of privileges that someone needs to access the data.

[00:16:50] We also need to think about the impact of our analytics. Are we going to be making a disparate impact, [00:17:00] treating groups differently? And so there are a few different things that we can start to do as we design the system. So things like, you know, if I want to look and see: is my system treating groups of protected classes differently?

[00:17:13] Am I making a recommendation more often for someone of a particular class versus others? But it’s not just the average prediction of our model. We also want to look at the accuracy of our model, for example, because that’s had a big impact as well. There have been, you know, so many news stories about computer vision models that have been less accurate on Black individuals.

[00:17:38] And because of that, these individuals have been arrested disproportionately, even when they’ve done nothing wrong. And so we really need to think about, you know, the impact of our system, and the impact of what happens if our system gets it wrong. What goes awry whenever that happens? And is there any way we can mitigate or limit that happening, to [00:18:00] really determine: is this the system we want to build?
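One concrete form of the check Sophia describes, whether a model recommends a positive outcome more often for one group than another, is a disparate impact ratio. The sketch below is generic, not SAS’s implementation; the group labels and data are invented, and the 0.8 cutoff follows the common "four-fifths" rule of thumb rather than any legal standard.

```python
# Generic sketch of a disparate impact check across two groups.

def selection_rates(records):
    """records: list of (group, prediction) pairs, prediction in {0, 1}."""
    totals, positives = {}, {}
    for group, pred in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, protected, reference):
    """Ratio of the protected group's positive rate to the reference group's."""
    rates = selection_rates(records)
    return rates[protected] / rates[reference]

# Invented example data: group "a" gets a positive recommendation 50% of the
# time, group "b" only 25% of the time.
data = [("a", 1), ("a", 1), ("a", 0), ("a", 0),
        ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

ratio = disparate_impact_ratio(data, protected="b", reference="a")
flagged = ratio < 0.8  # four-fifths rule of thumb
```

As Sophia notes, selection rates alone are not enough; the same comparison should also be run on per-group accuracy to catch models that are simply less accurate for some groups.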

[00:18:03] Luke: No, that’s awesome. Yeah, I really appreciate the context too, because I feel like, you know, a lot of people don’t really, you know, people that aren’t on the inside of these organizations and aren’t kind of like working at the level you are, they aren’t necessarily like. aware of just kind of like how much thought is going into, you know, mindfulness around kind of these different kind of situations or people might just not even be really familiar with that.

[00:18:25] They are things that are, you know, being looked out for. So I think, you know, I really appreciate the context there because I think it’s super helpful. Can you share an example of a SAS product or solution that’s made, you know, a significant impact on how organizations leverage AI that comes to mind?

[00:18:41] Sophia: I think back to the dependency management problem, and what we’ve really been putting a lot of focus on is containerizing these models, making them so that they have the minimum viable footprint that they need to execute the model, and having them run on standard tools. That’s been something that we’ve [00:19:00] really been focusing quite a bit on, and even optimizing those models where we can.

[00:19:05] So we have some really smart folks here at SAS who have been working on score code generation for a model. So of course your data scientist builds out their training code, and they know how to build a model object. If it’s Python, it might be, for example, a pickle file. We also need to think about the scoring code.

[00:19:23] So when we go to use that model, we typically have some sort of scoring function that takes in input data in some form, does some pre-processing on it, loads, for example, our pickle model or any other binary version of the model that we have, executes the model, does post-processing on the data, and returns it.

[00:19:44] So we’ve done quite a bit to focus on how we make that scoring function faster, to really speed up our scoring speeds, and just, again, using our knowledge of modeling and analytics to optimize what’s [00:20:00] executing within those containers so that it’s faster. We’re getting a lot of performance and speed improvements.
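The scoring-function shape Sophia outlines, load the serialized model, pre-process the input, execute, post-process, return, can be sketched generically. Here a pickled dict of parameters stands in for a real model binary, and the clamping step is an invented example of pre-processing; none of this is SAS’s generated score code.

```python
import pickle

# Stand-in "model artifact": a pickled dict of parameters, mimicking the
# binary model file a real scoring container would load.
model_bytes = pickle.dumps({"threshold": 0.5})

def score(raw_inputs, model_blob):
    """Generic scoring function: load, pre-process, execute, post-process."""
    params = pickle.loads(model_blob)           # load the binary model artifact
    cleaned = [max(0.0, min(1.0, float(x)))     # pre-processing: clamp to [0, 1]
               for x in raw_inputs]
    preds = [1 if x >= params["threshold"] else 0
             for x in cleaned]                  # execute the "model"
    return [{"input": x, "prediction": p}       # post-processing: shape output
            for x, p in zip(raw_inputs, preds)]

out = score([0.9, 0.2, 1.7], model_bytes)
```

Because this function is the only code that has to run inside the container, it is also the natural place to optimize for scoring speed, which is the work described above.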

[00:20:05] Luke: Awesome. That’s great. How have your studies in psychology influenced how you think about user behavior and AI adoption? Can you share any instances of where your understanding of psychology kind of influenced how you approach the challenge in AI or analytics? And I know we talked about ethics. Maybe it’s something there.

[00:20:22] Maybe it’s something else. I don’t know.

[00:20:24] Sophia: I think that’s an interesting question, because in undergrad people always questioned, like, why did you study psychology and computer science? And honestly, it’s because I liked both. In psychology, I did a lot with research, statistical research, and a lot of applied statistics, but then in computer science, of course, you learn how to program, but I really loved diving into algorithms.

[00:20:45] And so when I got my master’s degree, that’s where I kind of cemented the two into analytics, because I could code and I understood statistics and algorithms, and ultimately I got almost a data science degree before data science programs were very popular. [00:21:00] Working as a product manager, one of the things that I really focus on with the product

[00:21:06] and working with our developers is really thinking about motivation. How do we get folks interested or even excited about doing something? How can we tap into what they’re passionate or curious about? How can we demonstrate value to them? How can we show them that maybe this will make things easier?

[00:21:23] Maybe this will make you look smarter? And so just kind of understanding how we get individuals motivated to do tasks within our software and outside of it. I also really have done quite a bit using psychology when I’m thinking about communication.

[00:21:41] Sophia: I think communication has been a skill set that was kind of left aside in favor of a lot of technical skill sets for, you know, quite some time.

[00:21:49] And it’s actually been really helpful, because you can really go far if you can speak to technical users and business users and explain these technical concepts [00:22:00] in ways that they understand. And so I’ve really been thinking about how I present various aspects of our product, of data science and engineering principles, just to kind of tweak my presentation so that folks understand it.

[00:22:14] Thinking about human attention: I know folks typically only remember one to two things you say. So, you know, it’s sad, but it’s the reality of the world. And so how do you make what they remember the most important parts? And how do you ensure that they remember it? Of course, thinking through the various different ways people consume information.

[00:22:34] Some people like to just listen. Some people like to see things. Others like to read. And so when I’m thinking about building out these presentations, how can I leverage multiple forms of communication to reiterate my point? Some folks shy away from repeating things, but I think if you repeat the right things in meaningful ways, in various forms of [00:23:00] communication, it goes a long way toward people understanding and keeping the point that you want them to remember.

[00:23:07] Luke: I agree. Totally. I mean, I think good communication has kind of become a lost art with all this text-based stuff, you know, and you’re totally right about that. If you can meet people where they are, right? And, you know, I think a lot of what you mentioned totally fits that idea: if you can communicate effectively and meet people where they are, you know, people are naturally motivated to do things, right?

[00:23:32] It’s just a matter of them kind of being on that same page with you, so I think it totally makes a lot of sense. And on kind of a similar note around, like, you know, communication more broadly, though, I think this is kind of currently the big hype cycle, right? Like, when people are looking at tech in that kind of space, you know, there are a lot of

[00:23:52] different influences and whatnot. But like, when you’re looking at emerging technologies, you know, how are you kind of differentiating between [00:24:00] hype and what’s actually meaningful as far as advancements go?

[00:24:04] Sophia: As a product manager, this has been a very important skill set because, you know, you look at the AI and analytics space today and you’re right.

[00:24:13] There is so much hype. We’ve had so many exciting advancements in the last few years, but we’re also getting to the trough of disillusionment in some of these different areas. You know, some of our expectations have been falling flat, whereas, on the other hand, people are starting to succeed.

[00:24:33] For me personally, the most meaningful advancements are the ones that are solving tangible problems. You know, are we making things easier for individuals? Are we, you know, perhaps giving them some time back that they can use in a more meaningful way? I’ve seen a lot of things just be hyped up as “this is such a cool thing.”

[00:24:55] But being cool really isn’t a market position. It’s not sustainable. [00:25:00] You’re going to lose attention soon. So you really need to be focusing on what solves the problem. And what I’ve seen some successful organizations do is, again, think with this problem-solving mindset. What are we doing better?

[00:25:13] What is the purpose of this? And perhaps taking some of these advancements and trying to apply them to the problems in more of an experiment before investing more heavily. And then some frustrating things I’ve seen and heard are when, you know, leadership teams come down and say, you have to use a large language model, but then don’t give a use case or say what they’re trying to use it for, and just kind of let people figure out: how are we going to use a large language model?

[00:25:42] Of course, that’s very frustrating. That’s like being handed a hammer and told to go build something, versus, you know, “here is the architecture diagram for the house we want to build; here are the tools you need.” So for me, it’s really looking at: is there an actual problem that [00:26:00] this item is solving?

[00:26:02] Luke: Yeah, I think it makes a lot of sense.

[00:26:03] And especially now, too, you know, when people are seeing, hey, there’s a real cost to maybe, maybe using this for everything isn’t necessarily the best move now that we’ve been in it for a little bit. So I think that’s a really smart way of looking at it. There is so much information out there these days too, and I would imagine even more so in your situation because you’re dealing with, you know, technical products, usability, all these things.

[00:26:27] How are you staying updated kind of on the advancements of AI and for folks that are listening who might be, maybe they’re kind of new and are wanting to kind of carve out a new path for their own, like any recommendations around what areas would be good to look in for resources and things like that?

[00:26:43] Sophia: Yeah, I mean, I can definitely reiterate that it feels very overwhelming, especially depending on the area that you’re in and how much people are hyped about different things. I have maybe a bit of an interesting approach, in that I do most of my learning by talking to [00:27:00] practitioners, by talking to people who are actually doing things.

[00:27:02] So I go to conferences or events. Sometimes I’m just chatting with customers, or I’m chatting with other folks at SAS, but I am listening and seeing what they’re actually doing, what they’re actually building and developing. Something interesting that I’ve learned, having been in the space for a few years, is that a lot of these builders, doers, and practitioners love what they do.

[00:27:27] They’re very passionate about it. And if you show genuine interest in what they’re doing, they’re going to spill it all. They’re going to tell you all about everything that they’re working on. And this for me has been just a really great way to differentiate: well, what is actually feasible, and what are people actually doing, versus what is just hype?

[00:27:46] Because if it’s just hype, people aren’t really building things out of it. And so I know it’s very much an extrovert kind of approach to the situation, just going out and talking to people. But of course, you know, keeping up with them, following [00:28:00] them on LinkedIn, just reading through what they post, the blogs, the sessions that they do, because, you know, having tangible generation of assets, of development, is just a proof point that you know what you’re talking about and that you’re doing things.

[00:28:17] And so I generally just follow them after I meet them, but it’s not like I’m following a ton of high-level creators. It’s mostly just these natural touch points with individuals. I will say, if you are very interested in learning more about responsible AI and the consequences of AI, I highly recommend Cathy O’Neil’s book Weapons of Math,

[00:28:40] M-A-T-H, Destruction. It is just a fantastic primer on understanding the risks of where analytics can go wrong, where the practitioners may have had good intentions, but they just simply didn’t think through the whole process and what the consequences of that were. So definitely a primer that [00:29:00] I recommend for folks who are interested in responsible AI and data science.

[00:29:05] Luke: It’s awesome. No, I think a big takeaway from this is like, communicate with people, you know, like, get out there and talk to people, right? I know we covered a lot. Is there anything we didn’t cover that you think our audience might be interested in?

[00:29:16] Sophia: I mean, if I were to share anything again, it’s just that understanding of what can go wrong whenever AI is used irresponsibly, and just kind of understanding that AI usage is all around us. It’s going to affect decisions in our lives, even when we don’t recognize it and we don’t know it’s there.

[00:29:38] But with irresponsible usage of it, if it’s you who are doing it, to do it responsibly you have to be aware there’s financial risk. You know, if your model’s making the wrong decision and you make a business decision based upon that, you could have lost revenue. You could increase your costs, just because your model is wrong. Of course, you can get into reputational risk, and reputational risk can affect your downstream, affect your buyers.

[00:29:59] [00:30:00] If they realize that you’re doing things willy-nilly with people’s data, they don’t like that. If they realize that you are treating groups differently, you might have model bias. So there are financial impacts, but I can’t have people forget that there are also, a lot of times, human impacts

[00:30:19] to the irresponsible use of AI. Of course, it’s very specific to the industry and the use case that you’re in. You know, for someone who’s working on a health care model, a wrong prediction could lead to an individual not getting the treatment that they need. I know a lot of folks in the health care space are very cognizant of it.

[00:30:37] You can also think about, you know, situations where the harm might not be as clear. If you deny a loan to someone who would have paid you back, are you causing financial harm to this individual? Are you making things harder? So I always want folks to be aware of the risks, not to be doom and gloom, but rather so that we are acting more responsibly, so that we are [00:31:00] mitigating the risks, and so that we can carry the work forward a little bit better.

[00:31:06] Luke: No, it’s fantastic. I think that’s a really great note to end on. Where can people follow your work? Are you out there on socials, or are there any pointers you can give folks?

[00:31:16] Sophia: Yeah, I do have everything I work on coming through my LinkedIn. So every time I do a session, a talk, write a new blog or article, I’m always posting it on my LinkedIn.

[00:31:29] So I would definitely love it if more folks followed and connected on LinkedIn, just because that’s kind of like my centralized resource for everything that I’m working on.

[00:31:38] Luke: Fantastic. We’ll be sure to include that in the show notes. Yeah. Sophia, I really, really appreciate you dropping by and really enjoyed the discussion.

[00:31:46] And I have a feeling our audience is going to really appreciate it as well. So thanks so much and love to have you back too, to, you know, get updates, see how things are developing on your side of the world.

[00:31:55] Sophia: Of course. And if you are interested in, you know, any other [00:32:00] folks at SAS, I’m happy to make some connections as well.

[00:32:02] We’ve got our data ethics practice that I work alongside. They’re such a fantastic group of individuals who always love to share what they’re doing as well.

[00:32:11] Luke: Awesome. Well, thank you so much. This is really great. Really appreciate it.

[00:32:15] Sophia: Yeah. Thank you for having me.

[00:32:18] Luke: Thanks for listening to the Brave Technologist podcast.

[00:32:20] To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • The role of analytics in shaping the future of MLOps
  • The challenges organizations face in operationalizing machine learning models
  • The integration of AI and analytics at SAS
  • The importance of data quality and governance in MLOps
  • The future of MLOps and how analytics will continue to evolve

Guest List

The amazing cast and crew:

  • Sophia Rowland - Senior Product Manager

    Sophia Rowland is a Senior Product Manager focusing on ModelOps and ML Ops at SAS. In her previous role as a data scientist, Sophia worked with dozens of organizations to solve a variety of problems using analytics. As an active speaker and writer, Sophia has spoken at events like the AI Summit, All Things Open, SAS Explore, and SAS Innovate; she has also written dozens of articles and blog posts. As a lifelong North Carolinian, Sophia holds degrees from both UNC-Chapel Hill and Duke, including bachelor’s degrees in computer science and psychology, and a Master of Science in Quantitative Management: Business Analytics from the Fuqua School of Business. Outside of work, Sophia enjoys reading an eclectic assortment of books, hiking throughout North Carolina, and trying to stay upright while ice skating.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.