Back to episodes

Episode 42

Democratizing AI Through Open-Source Development

Jeff Boudier, Head of Product and Growth at Hugging Face, discusses the evolving landscape of AI and the role open-source development plays in its transformation. We also explore the challenges and opportunities that organizations encounter when embracing AI, especially with open-source solutions.

Transcript

[00:00:00] You’re listening to a new episode of The Brave Technologist, and this one features Jeff Boudier, who is the head of product and growth at Hugging Face, the number one open platform for AI builders. Previously, Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of product management, product marketing, business development, and corporate development.

[00:00:20] In this episode, we discussed why it’s important to democratize AI and adopt open source development, and how they’re making it easier to do so at Hugging Face. He also shared advice for leaders building AI projects, and ethical dilemmas that arise from advancements in emerging tech. Now for this week’s episode of The Brave Technologist.

[00:00:37] Jeff, welcome to The Brave Technologist. How are you doing today? I’m doing great. Thanks. Thanks for having me. Yeah, I’ve been really looking forward to having you on. So why don’t we just start off: can you help us set the table a bit with the audience? Can you tell us how you ended up doing what you’re doing with Hugging Face, and a little bit about your background?

[00:00:57] Oh, sure. So I joined Hugging [00:01:00] Face four years ago, and back then, Hugging Face was sort of the primary toolbox for researchers and data scientists who wanted to use these natural language processing models to do things like classifying text, doing a little bit of summarization, completing sentences, et cetera.

[00:01:20] And it was still very much a niche, sort of geeky field, but Hugging Face was the way you could access those models. You could actually build applications. The Transformers open source library was already there, et cetera. And so I was so excited to jump into Hugging Face and be sort of the first customer-facing, business-oriented person

[00:01:41] at the company, as the open source was growing, as the Hub was in its first stages of development. And how I got to that: well, in my previous startup, and that takes us back like 10 years ago, I had the privilege of working with Julien Chaumond, [00:02:00] who’s the CTO and co-founder of Hugging Face, and who introduced me to Clem Delangue, our CEO.

[00:02:06] And when that company got acquired by GoPro eight years ago, I stayed on at GoPro, and they founded Hugging Face, which is now over eight years old. So that’s how I got to join them in this adventure. That’s fantastic. Yeah. So, Hugging Face has kind of become a key player in the AI ecosystem, you know, and they’re one where they really stand out, especially from the open source community.

[00:02:30] And when you’ve got these giants like, you know, OpenAI and all these other ones out there, Hugging Face has just been right there, kind of pacing, you know, right alongside them. How would you describe Hugging Face’s platform mission, and what sets it apart, really, from some of these other platforms that are getting well known?

[00:02:47] Yeah, our mission at Hugging Face is to democratize good machine learning. And by good machine learning, what we mean is machine learning that is built from the community, so [00:03:00] community-driven, that is built from open source, and that is built from ethics-first principles. Why don’t we dive into a little bit more about Hugging Face’s open source approach.

[00:03:11] Are you guys kind of a distributed organization, and do you have a lot of community volunteers who chip in? And how is this important to how we really democratize AI development? Well, to accomplish our mission, right, to democratize good machine learning.

We’re approaching it through different angles. One is open science. So we have a fantastic science team that builds in the open new models, and new ways to build models. We approach it through open source. So our open source team builds a toolbox of open source libraries so that you can do anything from training models to evaluating models to deploying models, all kinds of different ways.

[00:03:57] It goes from Transformers to Diffusers to [00:04:00] Accelerate to Optimum to PEFT to TRL and so many more. And then another way that we democratize good machine learning is by building products and services that make it super easy for the community, for the world at large, to access and use open models and those open source libraries.

[00:04:18] And so that’s the Hugging Face Hub, which is how the AI community today shares and collaborates around models, around datasets, around applications. And that’s all of our commercial products and services. That’s fantastic. Does the community work a lot with the academic community too? One thing I’ve noticed, having interviewed a lot of people in this space over the past several months, is that the academic community at large seems to be getting really involved on the AI side of things.

[00:04:44] Has that been the case with Hugging Face, or is there any context you want to provide? Yes, in many ways. So of course, our open science team is engaged in many, many different collaborations with various academic university research teams across the world. A [00:05:00] great example of that was the BigScience and, thereafter, the BigCode

[00:05:05] projects, which were open science collaborations grouping over a thousand researchers from all kinds of universities and all kinds of companies around a common goal: to build, in the open, large language models, or in the case of BigCode, code generation models and datasets, and to publish every step of the process in the open.

[00:05:30] Another way that we interact a lot with the academic community is that they use Hugging Face to share their work, right? So if you have a novel method to create a new kind of model, you write a paper. That paper is going to be published on Hugging Face, and the authors of the paper are going to be tagged with their Hugging Face user accounts. And then you’re going to

[00:05:55] probably be publishing model checkpoints, and those are model repositories on Hugging [00:06:00] Face. Maybe the datasets will also be published, and you may publish a demo of that model, and that’s what Hugging Face Spaces does. So at every step of the process of sharing the scientific work with the community, Hugging Face has the right set of tools.

[00:06:19] That’s fantastic. I love it. So, with the rise of open source AI platforms influencing innovation, how does that compare? Just opposing that against, okay, you’ve got a Copilot and an OpenAI, and all of these things are getting integrated with these big tech players.

[00:06:35] Right? And, you know, admittedly, people are kind of alarmed about the potential influence. How does Hugging Face fare with this open source model? Is it still able to have influence in a very big way, from your point of view? Or how do you see the open source community helping to act as a counterbalance to the closed source?

[00:06:52] Yes. Well, first of all, I think, to level set, you know, AI is a [00:07:00] field that is really driven by science. And the way that science works is by having scientists share their discoveries and build upon each other’s work. That’s how science works. That’s how AI has always worked.

[00:07:15] And so all those big tech closed source models you were mentioning, they’re all built upon the shoulders of all the open source and open science work that has been done before them, right? But I think what’s interesting is that, over the past year, the story has been the open model capabilities catching up with closed model capabilities.

[00:07:39] And there’s an influential member of our community, Maxime Labonne, who has a great graph that shows the performance of the best open model at each point in time, along with the performance of the best closed model at the time. And what we see is that not only is [00:08:00] open source catching up, but the rate at which it’s catching

[00:08:04] up is actually accelerating. And so today you have open large language models that are similar to or better than some of the best closed models available. And we see the same trend happening in other modalities too, because generative AI is not just about generating text. You can generate images, videos, 3D objects, what have you.

And in all these other fields, we see a similar trend. A good example of that is image generation: the process of diffusion, to go from a text prompt to an image. The latest model, super popular right now on Hugging Face, is Flux. And Flux is an open source model. It’s leaps and bounds better than Stable Diffusion was just a couple of years ago.

[00:08:48] So it shows you the rate at which the open source community is catching up with the best possible closed models. That’s fantastic. And I think people love hearing that too, especially from our community, with everything being so open [00:09:00] source. And that kind of leads me to the next question, which is, I’ll say, a million dollar question, but potentially a bunch larger: what’s the monetization model for Hugging Face?

[00:09:10] You know, how does the company make money? And how do you see the company’s monetization model as things continue to grow? Well, thanks for asking. Hopefully it’s more than a million dollar question, hopefully multiples of that, right? It really is. Yes, so, we monetize Hugging Face.

[00:09:31] We make money by offering services on collaboration, on compute, and on support. And so, for collaboration, for context: on the Hugging Face Hub today, and we just celebrated, today, the millionth free and accessible model on Hugging Face, and that’s actually just the visible part of the iceberg.

We have today a million free and accessible models, [00:10:00] but there are also just about as many private models that you don’t see, because those are models that our users are keeping private so they can work on them for their business, et cetera. Like a private GitHub repo, right?

Exactly. Okay, cool. Exactly. And just like on GitHub, we also have organizations, the concept of teams, right? So people are working together privately, collaborating around models, datasets, and applications on Hugging Face, and over 100,000 of these organizations have been created on Hugging Face. And so, for collaboration, we have this service called Enterprise Hub.

[00:10:39] It adds layers of security and control over how you work with your colleagues privately on the Hub. Things like single sign-on, making sure that everybody within your organization is actually an employee of the company, not somebody who’s left and is working somewhere else; a way to assign different roles to different people so they can access [00:11:00] this and that, but not that;

[00:11:01] and many, many other features. So that’s Enterprise Hub, and it has been super successful. I mentioned compute. So on Hugging Face, you can find the million free and accessible models, and you can do anything from text generation for a chatbot to creating images, creating videos, transcribing speech into text and then translating it into another language, et cetera.

[00:11:24] And if you want to apply any of those models to your use case, then we have compute options. And maybe the most prominent of those is a service we call Inference Endpoints, where you can take any one of those models, or any one of your private models, and we put infrastructure underneath so that you can use it as an API within your applications.

And you can select AWS or Azure or Google Cloud, and this type of GPU, an A100, an H100, what have you. So that’s Inference Endpoints, and it’s a way we make money through the [00:12:00] usage of Hugging Face, when users are ready to go to production with their models. And then I mentioned support as the third thing.
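For a concrete sense of what “use it as an API” looks like, here’s a minimal Python sketch that builds the HTTP request for a text-generation endpoint, using only the standard library. The endpoint URL and token below are placeholders (your own Inference Endpoints deployment shows the real values), and the payload shape follows the common text-generation task format; treat the details as illustrative rather than a definitive API reference.

```python
import json
import urllib.request

# Placeholder values -- substitute the URL and token shown in your own
# Inference Endpoints dashboard.
ENDPOINT_URL = "https://example.endpoints.huggingface.cloud"
API_TOKEN = "hf_xxx"

def build_request(prompt: str, max_new_tokens: int = 64) -> urllib.request.Request:
    """Build a POST request for a text-generation endpoint.

    The body is JSON with an "inputs" field plus optional "parameters",
    which is the common shape for text-generation tasks.
    """
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Build the request; urllib.request.urlopen(req) would actually send it.
req = build_request("Summarize: open models are catching up fast.")
```

Note that swapping the model behind the endpoint doesn’t change this client code, which is part of the appeal: the infrastructure choices (cloud, GPU type) live in the endpoint configuration, not in the application.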

[00:12:09] So we really work hard to make it as easy for AI builders to build their own AI with open source as it would be if they were to just hit an API of a closed model: OpenAI, Anthropic, what have you. And, this being said, for companies to adapt the models to their own data, be it by fine-tuning or other methods, it does require a little bit more work.

[00:12:39] Right. And a lot of companies may be a little bit overwhelmed by the rate of new models coming to the market, and how they can apply that to their use case in their environment. And so that’s where our expert support service comes into play, where our team of machine learning engineers is helping day to day, guiding our [00:13:00] customers as they build their own AI with open source and open models.

[00:13:05] That’s fantastic. I mean, I think it’s a great breakout to have the different areas. And first off, congrats on the million number. That’s huge. I think people don’t realize the scale at which you guys are already executing, right. And you have a hundred thousand organizations, if I heard you correctly.

[00:13:19] Right. These are great numbers. I mean, I think it’s refreshing to hear that, because the one thing we are hearing more and more is people adopting more of the open source culture with things. Because, you know, once you start to see how these organizations can function.

You see how this is really a pretty healthy way to build things. And the support piece too. I mean, it’s one of those things that often gets forgotten in closed source companies and environments, just how critical that support is. And it’s just kind of organically there with open source.

[00:13:50] So it’s awesome to hear that it’s a big branch of what you all are doing there. What are some of the big challenges in building some [00:14:00] of these AI-driven products? Well, one that I mentioned earlier, right, is the challenge of going from an experiment to a production system that you put in the hands of your customers at scale.

[00:14:13] And that is definitely a challenge for companies. I think, you know, you were just talking about how, today, a lot of companies are relying on closed source companies to build their AI. I think the moment that we’re in now is kind of an anomaly. It’s an anomaly that stems from the fact that a year and a half, two years ago, everybody got excited by ChatGPT, and all the companies scrambled to figure out, okay, what’s our ChatGPT strategy?

[00:14:45] What’s our Gen AI strategy? And they started frantically experimenting and building proofs of concept. And the easiest way to do that is to just hit an API, right? Send a prompt, get a response. And so that’s why today we’re in this [00:15:00] weird time where a lot of those prototype experiments that companies have built are all built on closed source technologies

[00:15:08] that they have zero control over, right? From one day to the next, the model is improved a little bit, it changed a little bit, and all of a sudden it doesn’t want to answer your questions. This wouldn’t normally happen this way. You wouldn’t normally build technology, as a tech company, without controlling the underlying layers.

[00:15:27] So I think the challenging aspect for all of these companies is: how do I internalize this technology within my own capabilities? That means my compute. That means my cloud. That means my data. That means getting the models in house. And these are all things that open source can help you do.

But you do need to figure it out. So that’s why we have this expert support service that helps companies jump over the gap. And that’s why we’re trying, through our [00:16:00] products, to make it as easy as possible to use open models, so that companies can really internalize this stuff, make it run where all their other software runs, be more secure and compliant, and take control of their destiny.

[00:16:16] No, it’s fantastic. Are you all seeing a broad range of different types of companies coming in and working with Hugging Face? I’m just kind of curious. Yeah, you know, it’s really all across the board. You know, I mentioned 100,000 organizations. You have some of the biggest companies in the world, right?

[00:16:34] So you have Google, you have Microsoft, you have Apple, you have all of these guys. But then you also have all kinds of startups, and AI-focused startups, and companies in sectors of the economy that you wouldn’t really think of. We have maritime shipping companies who are building great stuff with AI on Hugging Face, right?

[00:16:58] So that sort of goes to [00:17:00] show that AI is not a niche thing that’s going to live in an app on your phone, from the big tech company experiences. It’s something that is going to bring better customer experiences and more productivity across the spectrum of the economy. There is not a sector of the economy that is not going to benefit from adding new features using AI.

[00:17:29] Yeah. And I think you totally captured it earlier: everybody was super excited about this and getting on the bandwagon, but then it’s like, okay, well, how do we integrate this into what we’re doing? And now you’ve got investors saying, hey, where’s the product, right?

And so I think having this open source way of doing it is really awesome, because it actually gives you that opportunity to experiment, iterate, and try to find that market fit for what you’re trying to do. And it sounds like you guys are augmenting that really well with the support side too, which is fantastic.

[00:17:59] [00:18:00] So given that you’re seeing a lot of different types of companies approaching you, from your experience, what leadership qualities are essential for steering these AI projects and teams towards success? Well, I think one thing that’s really important is to be very pragmatic, because you can do so many things

[00:18:20] with AI. And so you need to really be focused: what is the goal that you’re trying to accomplish? How is this going to add value to my customer? And have a very pragmatic approach about how you’re building these things. So I think that’s probably the most important thing. Don’t get too excited, but really focus on the problem that you’re trying to solve. Probably, that’s going to mean that some new types of work that haven’t really been thought about before are going to be needed.

If you want to create an optimized, customized model, [00:19:00] that probably means that you need to find out where the data is, the information that will make the model more relevant to you, and prepare it in a way that lets you improve the models, make them yours, and make them better.

So, yeah, I think being pragmatic is super, super important. And keeping an eye on the pulse of things, right? Because there are so many new capabilities, new modalities, constantly being added within the open and free models available on Hugging Face. So keeping an eye on this will allow you to see the opportunities that you can derive from them for your business.

No, I think it’s great. I think it’s really sound advice, too. It’s one of those things where people try to, like, eat the whole watermelon in one bite. And it’s really easy, with how fast everything comes out, to get five steps ahead of where you actually need to be, to get from A to B first.

So, good points there. I think it’s great. Beyond the [00:20:00] technical aspects, I’m really interested to hear your take on what some of the ethical dilemmas are that you anticipate arising, or that you all are encountering, right? Because you’re on the pulse of it.

And I feel like a lot of these conversations can spin off into weird directions around the ethical side of things, and get ahead of themselves. But from a real, so to say pragmatic view for you: what are some of the ethical dilemmas you’re seeing around these advancements, and how can projects address them in a smart way, just from your point of view?

[00:20:34] Definitely. Well, first off, when I described our mission earlier, I said that by good machine learning we mean machine learning that is community-driven, open source based, and built from ethics-first principles. So it’s something that’s very important to us. And I want to send lots of kudos to our amazing society and ethics team at Hugging Face, who do amazing work.

[00:20:56] One approach that’s important for [00:21:00] us is that we want to give the best tools and the best practices available to the community, so they can implement ethical AI within their own projects. So it’s tools like the dataset measurement tool from Dr. Meg Mitchell and team, which allows you to see your dataset through different filters.

[00:21:27] We’re working on energy efficiency ratings for models, so you have a better sense of what the most sustainable model is that you can use for your use case, saving energy and compute cost. So lots of tools, lots of best practices. If this is something that’s interesting to you, the team has a very cool newsletter where they show all the latest and [00:22:00] greatest work from them and the community

[00:22:02] on ethical AI. You can find it at hf.co/ethics. And maybe one more comment on this: I think it’s important that anybody who’s working on AI projects really takes an approach of thinking about all the potential risks and misuses of the eventual feature they want to build before.

Before it’s actually built. That really helps in the process, so that you can have some foresight. You can avoid unfortunate situations, of course, but it also makes the whole process much easier, so that it’s not sort of an afterthought on the day of launch: oh my God, have we thought about what could happen?

[00:22:50] I mean, it’s funny to talk about, but it’s a very common occurrence, because we’re all so excited. We want to build things, and there are all these amazing tools that we can use and all these [00:23:00] amazing things that we can build. But the best way to prevent a gaffe, as we’ve seen even from the biggest tech companies when they release a product, is really to use forethought, and to try to imagine, and to challenge yourself and the future system: what could go wrong, right?

Right. And then another thing that can significantly reduce potential risks is to use the right tool for the job, right? So if the feature that you’re building is, say, to give a degree of urgency to some email that you receive, you probably don’t need to use a model that could pass a PhD test, right?

Like the biggest GPTs that exist today. You’re probably better off building a very specialized model that is really built to do that one thing super, super well, and even better than the giant [00:24:00] model. So that’s also a way that you can make sure that your system is sort of responsible by default, because it’s really built for one thing.
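To make the “right tool for the job” point concrete in the simplest possible way, here’s a toy urgency scorer in pure Python. The keywords and weights are invented for illustration, and a real system would more likely be a small fine-tuned text classifier, but the property described above carries over: a narrow tool can only do the one thing it was built for.

```python
import re

# Invented cue words and weights, purely for illustration.
URGENT_CUES = {"urgent": 3, "asap": 3, "deadline": 2, "today": 1, "reminder": 1}

def urgency_score(subject: str) -> int:
    """Sum the weights of any urgency cues found in the subject line."""
    words = re.findall(r"[a-z]+", subject.lower())
    return sum(URGENT_CUES.get(w, 0) for w in words)

def label(subject: str) -> str:
    """Map a score to a coarse urgency label."""
    score = urgency_score(subject)
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"
```

A tool this narrow can’t be coaxed into generating text or answering off-topic questions; it can only score subject lines, which is the “responsible by default” property in miniature.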

[00:24:09] No, it’s great. And I absolutely love hearing that you guys have these toolkits around the ethical side, because it is real. I mean, a lot of these companies are under all sorts of pressure, whether it’s from investors or competition or whatever, to ship first. And so much of the culture with software is ship first and ask questions later, in a lot of things.

But from the ethical point, yeah, you really want to have that in the first principles of what you’re designing, as part of that early process. And so it’s great to hear that you all have tools that are available for companies. Because, you know, I know just from talking with a lot of different companies that people don’t really know the ethical stuff.

They’re less attuned to it than to other things, right? And so, again, kudos to the open source culture side of this, where you all are helping to equip [00:25:00] these companies and get them learning as part of that process. Because, yeah, I think it’s really critical.

So kudos to that. I’m looking forward to hearing: what’s your outlook on the future, and where Hugging Face is headed? And are you optimistic, in general, about where AI is going in the next five or ten years? I am, I am. I mean, I’m saving so much time every day, just using simple tools, simple AI features.

[00:25:25] They’re sprinkled within the apps that I use every day. I save so much time, and I think it’s going to keep going this way. I’m excited about AI being useful. I’m excited about AI making me more productive, and everyone else too. And I think there’s a little bit of a bias where, when we talk about AI, we tend to overestimate the short-term impact, right? Our lives will be completely different in one or two years, nobody’s going to need a job anymore, [00:26:00] right? And we tend to underestimate how profound the changes are going to be over the long term, 15 years out. So I’m super excited and optimistic, and I look at this with the long-term view, as we do at Hugging Face.

We’re there for the long haul. That’s why monetization is so important for us. We want to be a profitable, sustainable company. We want to become a public company down the line; not today, we’re not ready, but we’re in there for the long haul and we’re excited for the long term.

That’s fantastic. And I really appreciate you coming on and being so gracious with your time. But before we sign off, two things. One, is there anything on the horizon with Hugging Face that you might want to let the audience know about, or keep an eye out for? And then, two, was there anything that we didn’t really cover that you want to let our community and audience know about?

Well, thanks. Thanks for asking. I hesitated to bring it up because it’s not quite there yet. It’s not quite [00:27:00] launched, even at the time that you’re going to put this out on the internet. But let me give you a little bit of a teaser. So, earlier you asked me: what are the biggest challenges for companies today in adopting AI?

[00:27:12] And one of those challenges is that taking a model and having it run really, really fast, really, really well on compute infrastructure is actually quite hard. You have to deal with a bunch of stuff that you probably haven’t come across if you’re a data scientist: things like having to compile the models, and quantization, and using this data

[00:27:35] type versus that other one, the batch size, the sequence length, and all these configuration options that come into the picture. So we want to simplify all this, and we want to give a really easy, zero-configuration way of taking the most popular open models, maybe Llama 3.2 from Meta, and have them work in a very efficient way on whatever infrastructure you want to put under them.
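For a rough sense of what quantization means in that list, here’s a toy Python sketch of symmetric int8 quantization: weights are stored as 8-bit integers plus a single floating-point scale, and approximately reconstructed on the way back. Production toolchains are far more sophisticated than this; it only illustrates the basic precision-for-memory trade behind the configuration choices mentioned above.

```python
def quantize(weights):
    """Map floats to int8 values plus a scale (largest weight -> +/-127)."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate floats from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# The reconstruction error is bounded by about half the scale per weight.
max_error = max(abs(a - b) for a, b in zip(weights, restored))
```

The trade is visible in the numbers: each weight now fits in one byte instead of four, while small values near zero absorb most of the rounding error, which is why real tools expose so many knobs around exactly this step.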

So that’s [00:28:00] the new thing that’s coming up, probably a week after this podcast is released. And I’m going to tell you the name: it’s going to be called HUGS. So keep an eye, and an ear, out for HUGS, coming from Hugging Face very, very soon, to simplify the deployment in production of open models.

That’s fantastic. I’m sure the audience is going to love hearing about that and keeping an eye out for it. And thanks for sharing that here, too. Finally, if people want to follow along with you and what you’re doing out there in public, or more broadly with Hugging Face too, where can they tune in?

Well, I’m on Hugging Face. So if you go to huggingface.co/jeffboudier, that’s J-E-F-F-B-O-U-D-I-E-R, you will find me and you can follow me. You’ll get all my updates from Hugging Face. And I’m under the same username on X and on LinkedIn, so Jeff Boudier over there too.

Fantastic. Well, Jeff, really appreciate you coming on today. Thanks for sharing everything. I’d love to have you [00:29:00] back, too, to hear what else you all come up with in the months ahead. So thanks again for coming on. Really appreciate it. Thanks, Luke. All right. Take care. Take care.

[00:29:10] Thanks for listening to The Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately.

[00:29:25] Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • Why it’s important to democratize AI and adopt open source development (and how Hugging Face is making it easier to do so)
  • Ways Hugging Face is collaborating with academic institutions worldwide through projects like Big Science and Big Code
  • Essential leadership qualities for managing AI projects, and the ethical dilemmas that come with responsible AI practices

Guest List

The amazing cast and crew:

  • Jeff Boudier - Head of Product and Growth

    Jeff Boudier is the Head of Product and Growth at Hugging Face, the #1 open platform for AI builders. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as Director of Product Management, Product Marketing, Business Development, and Corporate Development.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.