Artificial Intelligence: Is it the Key to Boosting Human Intelligence?
[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.
[00:00:29] You’re listening to a new episode of the Brave Technologist. This one features Kjell, who’s the head of data science strategy and evangelism at Domino Data Lab. In this role, he advises organizations on scaling impact with AI. Previously, he covered AI as an industry analyst at Forrester Research, and he’s currently the host of the Data Science Leaders podcast.
[00:00:47] We think you’re going to enjoy this episode a lot. Some of the topics we discussed were how AI can help us actually boost human intelligence and what we’ve learned about our own limitations through advancements with technologies like generative [00:01:00] AI. We also talked about the way society is currently underestimating AI and the most underrated applications of its potential.
[00:01:06] And now, for this week’s episode of the Brave Technologist. Welcome to the Brave Technologist podcast. Why don’t we start off by getting a little background on what your involvement is around AI, what you and your team are building, and where you guys are at?
[00:01:20] Dr Kjell: Oh, certainly. Well, I mean, if you go back far enough, you could go to the late nineties, when I was driven out of a computer science major by taking a course on AI.
[00:01:29] I looked at the syllabus, and it was neural networks and heuristics. And I took one look at that and said, yeah, nobody’s going to be using those. So the joke is entirely on me. I went and became an economist, a statistician, an applied statistician. Eventually, about 10 or 12 years ago, somebody made the questionable decision to give me my own data science team, and we started building AI applications using NLP technologies and computer vision technologies.
[00:01:58] Then I ended up becoming an [00:02:00] industry analyst at Forrester, where I was covering AI, computer vision, speech analytics, and these strange things called transformer networks. And I was beating the drum about them nonstop, and no one was listening. And then all of a sudden, ChatGPT happens, and it’s everywhere.
[00:02:16] So I like to say that I told you so about the generative AI revolution. Currently, I’m the head of data science strategy over at Domino Data Lab. We’re a data science platform. We help the world’s largest, and often the most sophisticated, data science organizations operationalize artificial intelligence.
[00:02:36] We have customers who are pharmaceutical firms that are leveraging us to build generative models that help design new synthetic peptides for things like diabetes and heart disease. We have other customers who are leveraging us for computer vision, building deep learning models for medical imaging. But also,
[00:02:57] if you’ve gotten an insurance quote [00:03:00] or you’ve applied for a loan, there’s a very high likelihood that the machine learning model on the back end that was assessing your creditworthiness was built with our platform. So it’s wonderful. It gives you a perspective into what these incredible organizations
[00:03:16] are doing with artificial intelligence. But it makes you sort of sad that we’re on the outside as the observers. We’re enabling this, we’re not actually building it ourselves, but we’re having that impact by, again, promoting this industry.
[00:03:29] Luke: No, it’s awesome. I mean, I think it’s really important for people to understand that this isn’t something that just happened yesterday, right?
[00:03:35] Like, this is stuff that’s been a work in progress. There have been cycles here. And just the spectrum of use that you just described is where people might not necessarily realize this stuff’s been in practice for quite a bit. The parlor tricks of the current hype cycle aren’t the only thing, right?
[00:03:49] Like, and it’s kind of one of the big things we’re trying to illustrate here. I mean, what do you see as kind of the biggest challenge in driving transformational impact with generative AI, or the industry [00:04:00] in itself, right now?
[00:04:01] Dr Kjell: Yeah, well, I mean, it’s a challenge that we’ve actually been having for a really, really long time.
[00:04:05] When I was an industry analyst, it often felt that when I was talking to data science teams, it was a little bit like being a therapist. It was problem after problem. They knew they had incredible technologies at their disposal. They knew that the data was out there. They knew that there were fantastic things that they could be doing, but actually getting it into production was a nightmare.
[00:04:26] So really, that problem of operationalizing, and not just operationalizing a model on a one-off basis. We can do that through sheer force of effort. But being able to do this at scale, being able to not just develop and deploy a given AI solution, but continuously improve it, and then start developing a little army of all of these different applications throughout the organization.
[00:04:48] And that has been the real challenge for a lot of organizations. And we’re seeing the same thing on the generative AI side of things. Everybody is doing incredible experimentation, and then running [00:05:00] headlong into the… oh, crap. Now this is even harder to put into production than traditional AI models, because they’re so much larger.
[00:05:07] Often, if you’re looking at the hyperscalers, a lot of those models are proprietary. They don’t allow you to download your own copies, operationalize them on prem, or even sometimes fine-tune them. So that transition from the experimentation phase to one where we’re actually embedding these within our applications and processes throughout the organization, that’s, I think, the biggest challenge that folks are either facing or going to face in about two to three months’ time on the generative AI side of things.
[00:05:39] Because again, these models are giant compared to what we were doing, even in the computer vision space. And in the computer vision space, those models were giant compared to the even earlier generation of machine learning models that we were using. So, fine-tuning these things, even operationalizing them.
[00:05:56] Previously, it was like the training was hard and very compute [00:06:00] intensive, but then, once you put them in production, that part was fairly cheap. Generative AI models are flipping that. It still is much, much more expensive to fine-tune, or heaven forbid, train from scratch, one of these things. But even putting them into production and using them, especially using them at the scale that we would like, is expensive and difficult.
[00:06:20] There’s another, very different challenge, which is the fact that we always anthropomorphize these technologies. People often think, okay, well, it’s artificial intelligence, it must be like human intelligence, and that means we can go in and automate away the human intelligence component of it.
[00:06:40] And there, well, often we couldn’t do it. The technology is not there yet to do that, even if we wanted to. But even if we could, the ROI often isn’t there. It’s usually augmentation, leveraging human intelligence and artificial intelligence in new and creative ways, that leads to the biggest bang for your buck.[00:07:00]
[00:07:00] Luke: I’ve been seeing new types of challenges, or I guess challenges is kind of a big word, right? Like, there’s challenges and there’s concerns and there are all sorts of, I mean, especially when you get into production with these things, right? I’ve been in operations for a long time, and it’s kind of like war planning, where, you know, the plan works until the first bullet is fired, right?
[00:07:20] And then it all kind of goes away. How has that leap been, going from experimentation into operations? Have you found new sets of concerns that you didn’t necessarily expect, or just a different ballgame of challenges that have arisen, now that we actually have a lot more people using a lot of these things and a lot more attention on them?
[00:07:42] Dr Kjell: There’s a set of challenges which are traditional challenges, they’ve just been turned up to 11. And those are the ones when it comes to the infrastructure that’s needed for it, or the fragmentation of the ecosystem that’s out there. There are new tools that are popping up left, right, and center.
[00:07:56] There are new ways of doing things. There are new capabilities, new models that are [00:08:00] coming out on a practically daily, maybe even sometimes hourly, basis. So that’s ratcheted up the complexity of managing that ecosystem and orchestrating all of those different parts of the stack to get everything to work.
[00:08:11] So there are those ones, but, you know, those are problems that we’ve always had, just to a higher degree. Then there are the challenges which are new. Part of this is that we really didn’t have good ways of generating text, voice, language, and images previously. So there are many use cases that we want to go after, that we can go after, that we’ve really never thought about how we would do before.
[00:08:39] And that then requires a rethink. What should we be using these for? How are we going to be using them? Now, that’s not to say we can’t use these generative AI models for a lot of stuff that we do understand, with well-defined use cases. Yes, we totally can do that as well. But everybody wants to go after these more generative use cases,
[00:08:57] where we are trying to drive a [00:09:00] conversation with a user in ways that we’ve never done before. And that’s the problem. We’ve never done them before. And so figuring out what a properly automated conversation, without a whole lot of boundaries around it, looks like, that’s really difficult. The other big thing here is the testing and monitoring of these.
[00:09:19] Right? Our traditional ways of monitoring models for things like bias and drift don’t really apply for a lot of these use cases. Like, I mean, how do you quantitatively ensure that your chatbot hasn’t gone and said something racist? Well, okay, maybe we have a dictionary and a bunch of rules and other things that we can run against this.
[00:09:42] Maybe we run this against another model, which goes in and tries to detect racist text and places controls on the first one. Figuring out how we monitor these models, govern these models, and test these models is something that we’re still working out, because the range of capabilities and outcomes that they [00:10:00] can deliver on is now just so much larger than previously, when it was,
[00:10:03] give me a fixed score, a predictive score. And I was like, yeah, okay, I know what to do with that one. I know how to test that, in a way that I don’t when it’s now a free-form conversation.
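The second-model guardrail described above can be sketched in a few lines. A minimal sketch, assuming the Hugging Face transformers library; the classifier name, its output labels, and the blocking threshold are illustrative choices, not a production recipe:

```python
from transformers import pipeline

# Screen candidate chatbot replies with a separate toxicity classifier.
# "unitary/toxic-bert" is one publicly available example; swap in whatever
# moderation model you actually trust.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

BLOCK_THRESHOLD = 0.5  # in practice, tuned on a labeled validation set

def guard_reply(candidate: str) -> str:
    """Release the reply only if the screening model scores it as safe."""
    result = toxicity(candidate)[0]  # e.g. {"label": "toxic", "score": 0.97}
    if result["label"].lower() == "toxic" and result["score"] > BLOCK_THRESHOLD:
        return "Sorry, I can't help with that."  # safe fallback reply
    return candidate
```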
[00:10:16] Luke: And this whole safety layer kind of concept is pretty dynamic too, I would imagine, you know, especially with broader and broader scale adoption, and all these different use cases that you’re talking about that are basically coming up daily.
[00:10:27] So now, as more and more people are using this stuff, what do you think generative AI tells us about human intelligence?
[00:10:33] Dr Kjell: Oh, yes, because often you’ll have folks saying, okay, great, these things are so powerful. Maybe we now have a route towards artificial general intelligence, you know, to get us to what we’re seeing in movies and science fiction books and things like that.
[00:10:47] And personally, I find that it actually goes in the opposite direction. It doesn’t tell us how incredibly smart the models are. They do incredible things, but it’s not because they’re smart. Instead, what it does is provide this really quite [00:11:00] depressing mirror towards ourselves. Well, if I can generate this human-like behavior with a very deterministic, very clearly dumb model, what does that mean we’re doing most of the time?
[00:11:12] Most of the time, we’re very likely on autopilot, functioning as, you know, next-best-action prediction models, which is what these models are. And we’re not using a whole lot of our human intelligence. It just looks like human intelligence, but actually it is the result of some fairly basic and often highly flawed processes.
[00:11:30] So, you know, when we complain about these models hallucinating, what is that? When you think about it, that’s what we’re doing all the time. We’re making stuff up and getting stuff wrong all the time and not realizing it. And that’s what the model is doing as well. So it’s fascinating what the implications of these developments are for things like neuroscience and for understanding ourselves.
[00:11:51] The worrisome thing is that it’s really reflecting that, well, either we’re not very intelligent, or we’re just not using that intelligence a lot of the time. [00:12:00] The good part is that it prompts us for this. We can use these models to circumvent, highlight, and make us aware of the flaws in our own reasoning, which is very promising. Not to mention the fact that obviously these models
[00:12:15] can do things that we can’t, right? I mean, they can pull information from any source, synthesize and apply that data, and search across all of that data in ways that we can’t. So there is this wonderful opportunity to boost our own human intelligence using these. But as I said before, we’re not at risk of being replaced by these things.
[00:12:34] It’s more of that old HBR adage: AI isn’t going to replace managers, but managers who leverage AI are going to replace managers who don’t. And you can pretty much substitute managers with content creators or customer service representatives or marketers, and the statement pretty much still holds.
[00:12:53] Luke: Yeah, it’s fascinating. I mean, it’s part funhouse mirror, but also part everybody-says-10x-engineer or whatever, right? [00:13:00] Like, oh, this thing can give you superpowers or whatever, in what you’re doing in development or machine-readable things and stuff like that. But where do you think AI has really been the most successful so far?
[00:13:11] And where do you see kind of the current big gaps in capabilities?
[00:13:14] Dr Kjell: Yeah, what these technologies have done at a conceptual level is really make feasible the analysis and generation of unstructured data. And that’s, you know, text, voice, images, video, et cetera. And why is that important?
[00:13:31] Well, it’s because the world that we live in is an unstructured data world, right? We communicate through images, voice, and text. We generate these things. We’ve been forced, when we interact with technology, to really circumscribe this and structure it in very artificial ways in order to be able to leverage any form of technology.
[00:13:52] And all of a sudden, that’s been flipped on its head. So that ability to interact with [00:14:00] technology, and for technology to just understand and be useful in the world that we live in, has been completely blown wide open. The generative part of it, where it’s actually speaking, or actually generating text or images, that’s really cool, but it’s just the tip of the iceberg, and again, we don’t really know yet how to use all of those things. But being able to analyze all of that unstructured data and make all of that useful.
[00:14:27] So now, all of a sudden, all these corpuses of text in organizations that we couldn’t really do anything with, those are now useful assets in a way that they weren’t before. Take being a researcher in, like, pharmaceuticals and healthcare.
[00:14:44] Now, all of a sudden, I’m not limited by my very, very limited ability to find research reports, read through all of them, and hope that I’ll happen upon the right insights. Now those can be [00:15:00] analyzed at scale, and the insights and trends out of those can be distilled.
[00:15:03] So really, as a tool for analyzing the data that we all generate, and the data of the world around us, we’ve come really, really far. The ability to create new technology that participates in that world, we now have a way towards that, but we’re still trying to figure it out. But then there are areas where we all of a sudden fall off a cliff. Things like logic and reasoning, how all of that works, we aren’t there yet on the machine learning side. There are folks who’ve done research on it, and we just haven’t cracked it.
[00:15:36] In the same way that we clearly stumbled upon these attention mechanisms with transformer networks, which just turned out to unlock all sorts of human-like abilities in these machines, we need to stumble on a similar thing when it comes to adding reasoning capabilities to these kinds of models.
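For reference, the attention mechanism referred to here comes from the 2017 transformer paper ("Attention Is All You Need"), and it is strikingly simple: every token’s query is matched against every token’s key to decide how much of each token’s value to blend in,

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,

where Q, K, and V are the query, key, and value matrices and d_k is the key dimension.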
[00:15:55] Luke: And with your kind of lens into the operational side of this, you know, you’ve been talking about covering things [00:16:00] from finance to health care, et cetera. Obviously, now it’s kind of a different climate too, and you’ve got things like GDPR, and a lot more everyday folks and businesses inputting data into these AI systems.
[00:16:12] How much of that concerns you, with what you guys might catch on the other side of it, with what’s getting brought into these on the data side? And how much safeguarding are you seeing in general, from where you’re sitting? Like, oh wow, I can’t believe we’re getting sensitive information coming in.
[00:16:27] Are you seeing a serious effort in this space around safeguarding some of this stuff? I mean, I think these are things that more and more people are going to be curious about. I mean, we have a browser, right? We’re introducing a prompt into the browser.
[00:16:40] And so we’re getting these concerns from people on a daily basis and trying to safeguard against them. But what do you see from your lens? I’m just curious, given your experience in the space, and the length of that experience too, how this regulation and these other things play into all of this.
[00:16:55] Dr Kjell: Oh, yes. It’s a massive concern and a massive hot issue right [00:17:00] now. I think probably every conversation I have with a CDO is one of panic as to, how do I ensure that my organization is not entering sensitive PII into ChatGPT? There’s a scramble right now to figure out a way in which you can enable that within the company in a secure fashion, because this is a little bit
[00:17:21] like Google. If you were to block access to Google in your organization, people are going to start using it on their phones and their own personal devices and other things, and you’re going to end up with the same problem. And if you block ChatGPT, people are going to find ways around it to leverage these models.
[00:17:36] So right now there’s this big scramble: how can we deploy these models in a governed, secure fashion that won’t run afoul of regulators, and also won’t get us into trouble with the public? That is happening, and it’s a very real concern. It’s one where we do have plenty of tools to deal with it.
[00:17:56] And to what degree is that going to be an issue long term? It’s more of a [00:18:00] headache, but it’s one of, okay, how do we comply with different data privacy regulations in multiple countries? And there are ways. You start building hybrid cloud platforms that enable you to leverage the data where it is and build and train your models on it, or maybe even leverage federated learning to train models in different places and merge them later on.
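A minimal sketch of that federated idea, with a plain least-squares model standing in for whatever is actually being trained (all names and shapes here are illustrative): each region updates its own copy of the model on data that never leaves the region, and only the weights are merged centrally.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01):
    """One gradient step on a site's own data; the raw data never leaves the site."""
    grad = X.T @ (X @ weights - y) / len(y)  # least-squares gradient
    return weights - lr * grad

def federated_round(global_weights, sites):
    """Each region trains locally, then only the resulting weights are averaged."""
    local = [local_update(global_weights.copy(), X, y) for X, y in sites]
    return np.mean(local, axis=0)  # FedAvg-style merge
```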
[00:18:20] So that’s happening. So, on the data privacy issue, I mean, we will certainly have folks who misbehave. It’s difficult to prevent folks like Clearview, where, if you’re intentionally going out and abusing people’s private data, it’s difficult to stop that except through regulation and penalties and fines and things like that.
[00:18:42] But the more concerning trend for me is that folks aren’t thinking about how we build the sort of industrial-grade governance processes around building and maintaining these models. Everybody seems to automatically jump to, well, we need to promulgate ethical AI principles and we need to create an AI [00:19:00] ethics board.
[00:19:00] And, you know, that’s nice, but at the end of the day, you can have all of those things, but if you don’t know what prediction was made about a person, let’s say on credit, and you can’t track that back to a model, and you can’t track that back to the code that generated that model, and you can’t track that back to the data that created it,
[00:19:16] what good are those principles? Because you’ve got no way to put those principles into practice. The thing that worries me is that we’re all worried about the Skynets of the world and this anthropomorphized view of AI, and not worried enough about the mechanics of it: those best practices for governing not just the model, but that whole life cycle.
[00:19:40] Now, I mean, I’m cautiously optimistic, because we do have plenty of organizations and regulated industries who’ve been working on this for a really, really long time, and we’ve got tools that support it. This is a key reason why folks come and use our platform: because you can have that governance and monitoring around it.
[00:19:56] It’s just worrisome to me that so much of the rhetoric is around [00:20:00] preventing AI from behaving like an unethical person. And, you know, those principles don’t work terribly well even at stopping humans from behaving like unethical humans, let alone when we apply them to this technology.
[00:20:15] Luke: It’s almost like sensationalizing, right? Or hyperbole or whatever. And part of this sounds similar to operational security measures that a company would take, where it’s like: don’t click on suspicious links, don’t input stuff that you wouldn’t want anyone else inside or outside of the company to know.
[00:20:31] Yeah. Just practical matters, right? It seems like we’re in a phase where, as these tools get into more people’s hands, there need to be educational processes that people can follow around best practices, et cetera.
[00:20:42] Dr Kjell: No, that’s super, super interesting. It is nice to see how much it’s progressed, at least on the European side of things.
[00:20:48] I mean, looking at the most recently formulated AI regulations that are coming out, and comparing those to the ones that were proposed two to three years ago, you just see a world of evolution, a world of learning amongst the folks who are [00:21:00] crafting these. And we need to do that here in the U.S. as well.
[00:21:04] I mean, just recently, the White House had yet another tranche of tech companies signing up for voluntary safeguards, or voluntary commitments. And you look at them and you’re like, well, these aren’t commitments, and they’re not safeguards, and they’re not actually really going after the right things. But at least they’re starting on the process, and I’m sure we will get there.
[00:21:23] Yeah. Yeah.
[00:21:24] Luke: Yeah. Just getting some of that attention, sure. In what ways do you think AI is overestimated and underestimated by the general public?
[00:21:35] Dr Kjell: Well, it is one where, over and over again, and it’s not just with generative AI but all the earlier incarnations of things with deep learning, or even before we had commercially feasible deep learning, it was always in that weird area where these technologies are both more powerful and less powerful than you think, right?
[00:21:54] They’re less powerful for the things you think they can do, and they’re much more powerful for the things that you don’t realize they can do. And so when it comes to [00:22:00] generative AI, it is one where, ironically, the generative capabilities are arguably a little bit like the parlor trick, right?
[00:22:08] I can go in and have a ChatGPT-like experience with it. That’s awesome, that’s really cool. But the fact that it can understand so much of what you’re writing, and aggregate that, and process that, and analyze all of that, is actually where it’s super powerful and useful right now.
[00:22:28] So if you think about this, it’s like, well, sure, I could create a chatbot that interacts with all of my customers right now using generative AI and is able to talk about essentially any conversation topic. And almost certainly, somebody is going to figure out a way to make it say something either racist or sexist or something else and get me in trouble.
[00:22:46] So I probably don’t want to do that. And certainly we didn’t have a way to do that before. But the same technology can look across every review of your product on Amazon, on walmart.com, on eBay. It can look at every social media [00:23:00] post out there about your company. It can look at every transcript of every conversation that’s going on with your customer service professionals right now.
[00:23:06] And across all of those, it can pull out the trends, pull out what people are most frustrated about, pull out what people are saying about the competition. That’s incredible. That’s a degree of intelligence that we could never have leveraged before. But people still automatically gravitate towards that chatbot
[00:23:25] that’s going to somehow take over those interactions with everybody. So that’s, I think, where, on behaving like human beings, we’re overestimating its capabilities. But it can behave so much more powerfully than human beings when it comes to analyzing data, and that’s where people underestimate what we can be doing with it right now.
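That review-mining pattern is essentially a map-reduce over unstructured text: summarize batches of raw reviews, then distill the batch summaries. A minimal sketch; `llm` is a hypothetical stand-in for whatever model call you actually use.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever generative model you use."""
    raise NotImplementedError("plug in your model or API call here")

def top_frustrations(reviews: list[str], batch_size: int = 50) -> str:
    # Map step: summarize the complaints in each batch of raw reviews.
    partials = []
    for i in range(0, len(reviews), batch_size):
        batch = "\n".join(reviews[i : i + batch_size])
        partials.append(llm("List the main complaints in these reviews:\n" + batch))
    # Reduce step: distill the batch summaries into the recurring themes.
    return llm("Merge these lists into the top recurring frustrations:\n"
               + "\n".join(partials))
```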
[00:23:45] Luke: Just hearing you say that, it seems like, you know, when one is in startups, right, you’re always on this quest to find product-market fit. And these tools seem like they’d be extremely powerful for helping on that journey, right? You’re constantly listening to feedback.
[00:23:59] You’re [00:24:00] constantly trying to see where the bugs are, what the trends are in what you’re doing, and all of that. And we’ve had kind of elementary capabilities around that, but it seems like it’s really getting supercharged by a lot of this technology and a lot of the use that’s happening now.
[00:24:13] Let’s just say, in a perfect world, if you could have one really killer AI feature in your everyday life, what would that be, with what you’re seeing right now?
[00:24:22] Dr Kjell: Well, ironically, it’s a feature to run AI applications. So when you’re thinking about an AI application, especially a generative AI application, it’s a very complex set of different steps,
[00:24:35] which usually involve multiple different generative AI models along the way. And if you can orchestrate that, if you can manage and govern that process of, okay, getting the data in, running it through a bunch of models to create features, numerical versions of the data that’s coming in, feeding that into a generative model, maybe five or six times, to see what the consensus is around what the model thinks the appropriate answer should [00:25:00] be,
[00:25:00] then taking those results, summarizing that consensus, and taking action off of it. If you can string those together and build those, you have an AI application and can do almost incredible things, whether that be everything from automatically fighting parking tickets, which people have built with these things, to a learning tool that would help you learn another language, right?
[00:25:26] Using one of these models, you have pretty much all of the building blocks that you need. So the tool that I would like is the tool that helps me build these AI applications and deploy them really, really quickly. And ironically, I don’t actually need AI involved in the creation of that tool, or running in that tool.
[00:25:45] It can be fully deterministic. I just need a convenient way to stitch together all of those different components: the infrastructure, the models, the data, and the code. Once we can create those kinds of gen AI application development [00:26:00] platforms, that’s when the really cool stuff happens.
[00:26:01] That’s when the proliferation of generative AI applications takes off. I guess that’s not really answering your question, though.
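The "feed it into a generative model five or six times and take the consensus" step described above corresponds to what the research literature calls self-consistency sampling. A minimal sketch, again with a hypothetical `llm` stand-in for the actual model call:

```python
from collections import Counter

def llm(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for a sampled (nonzero-temperature) model call."""
    raise NotImplementedError("plug in your model or API call here")

def consensus_answer(prompt: str, n_samples: int = 5) -> str:
    # Sample the model several times, then keep the answer it returns most often.
    answers = [llm(prompt).strip() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```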
[00:26:09] Luke: Well, I mean, naturally, my follow-up to that would be: how far off do you think we are from having those kinds of dev tools or building blocks, like Lego blocks, to get there?
[00:26:19] Like, do you think that’s something we might see in the next 12, 24 months? Or is it farther off than that?
[00:26:24] Dr Kjell: It’ll be a bit of a journey. So right now you have the foundations for a lot of things. You can build these yourself, it’s just a lot of manual effort. And obviously, as Domino, we do a lot of those things.
[00:26:36] I don’t think we can claim yet to do all of that, but we do very core components of it right now. And so I think it’s going to be a journey to get to where it becomes really easy, really easy for anybody to go in and not just train the model and use the model, but build the whole application.
[00:26:51] And so I don’t think it’s going to take very long. Right now, there probably are some startups out there where, if you’re [00:27:00] very, very constrained in what you’re trying to build and only use these very specific components, you probably can do it right now. But something which is more general purpose, that suits the needs of the broader portfolio of generative applications that, like, a midsize company would like?
[00:27:18] Yeah, I would love to say six months. I think it’ll be a bit longer before it’s super easy, but within six months we’ll certainly have the foundation there. There are things that we can definitely make easier, and that will definitely support a lot of these.
[00:27:34] Luke: Awesome. Yeah, and kind of in that vein, you know, open source is obviously playing a role here.
[00:27:40] Like, how much of an impact do you think that’s having in the space in general now, compared to previous cycles around this? Do you see it?
[00:27:46] Dr Kjell: Yeah, it’s huge. It’s absolutely huge. It’s what enabled the deep learning revolution, circa 2012 onwards; it was foundational [00:28:00] there. But it is even more so on the generative AI side of things. There is nobody out there who stands for more than a tiny, tiny sliver of the innovation that’s happening, and you are seeing folks on the open source side of things replicate, within a matter of weeks, what corporations have been pouring hundreds of millions of dollars into.
[00:28:25] If you haven’t read it, that leaked Google memo, "We Have No Moat," is very illuminating here, on the degree to which there’s just so much innovation happening that nobody really seems to have a differentiated competitive advantage, or a very sustainable competitive advantage, yet.
[00:28:44] Luke: Yeah, it is. And then you even see people just in casual conversation saying, oh yeah, we’re not quite at GPT-3.5 yet, but we’re getting close, when they’re talking about some of these open source applications.
[00:28:57] Dr Kjell: And the exciting thing is that they’re doing it with models which are a fraction of the [00:29:00] size.
[00:29:00] So, like, GPT-3.5 was something like 137 billion parameters. It’s a large, beefy model that you can’t really operationalize. And then somebody comes along and does it with a model which is like 5 billion parameters, and you’re like, wait, what? And it’s performing as well on, maybe not all of the tasks, but certainly a lot of the tasks.
[00:29:20] And that then really changes the calculation in terms of what you can do.
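Back-of-envelope arithmetic shows why the size difference changes the calculation: at 16-bit precision each parameter costs two bytes, so the weights alone come to roughly the following (illustrative numbers, ignoring activations, KV caches, and optimizer state):

```python
# Two bytes per parameter at 16-bit precision; weights only.
for name, params in [("137B-parameter model", 137e9), ("5B-parameter model", 5e9)]:
    print(f"{name}: roughly {params * 2 / 1e9:.0f} GB of weights")
# ~274 GB vs ~10 GB: the smaller model fits on a single GPU.
```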
[00:29:24] Luke: Yeah, and I think that’s one thing that we’re looking at too. We had a pretty naive local machine learning model for ad matching in the browser, right? That was really early going, but we’re starting to play with more of these concepts around using these models locally, with local data, and things like that.
[00:29:39] And it’s pretty awesome. I mean, just seeing what will possibly be possible off of this as we iterate, and all of that, and just the variation you have between some of these more cloud-based models and some of these local ones. You know, one thing we’re trying to do here: Brave’s audience is
[00:29:55] just a lot of developers, a lot of users, a lot of people that are into new [00:30:00] technology, and maybe they’ve been working in the web but haven’t dipped into AI deeply yet, and are interested. What favorite resources or people or brands or outlets would you suggest somebody that’s new in the space dig into, or that you could recommend around AI?
[00:30:16] Dr Kjell: Hmm, man, absolutely. I mean, at the end of the day, I don’t think there’s any substitute for quite literally just cracking open ChatGPT, if you haven’t, and Midjourney, if you haven’t, right? I mean, the single biggest thing that you can do to accelerate your journey with them is to start using them, and they are ludicrously simple to use.
[00:30:33] I mean, there is a learning curve with Midjourney, just because you’ve got to download Discord and figure out why on earth you’re interacting with this thing through a chat interface. But once you’re over that, because you get to see what everybody else is feeding this model and the other ways other folks are using it, you learn extraordinarily rapidly what these things are capable of.
[00:30:55] That being said, I came across a very interesting application the other day [00:31:00] called Mindstone, which is an application for learning that leverages generative AI. It helps create a tailored learning program and feeds you content on an ongoing basis, to try to gamify and engage you in that learning process, versus, you know, I want to learn something,
[00:31:20] I start something, and then I completely drop off. And I discovered that you can create a learning module around generative AI. So you can leverage generative AI to help teach you about generative AI. And to me, that is gloriously self-referential. I would give it a look.
[00:31:38] Yeah.
[00:31:39] Luke: I haven’t heard of that one. That’s awesome. I mean, there’s almost an Xzibit meme in there somewhere too, I think. And kind of rounding it out, just for fun: is there a favorite movie or book around the topic of AI that you’d recommend for people, or one that motivated you at some point in your career in this space?
[00:31:59] Dr Kjell: [00:32:00] Oh, well, in terms of one that motivates me, and one which I think is still absolutely relevant, but which I don’t think says AI anywhere in it: it is Clay Christensen’s The Innovator’s Solution. It is absolutely spot on in terms of what the challenges are when it comes to implementing generative AI, how you would use it, and helping you find the use cases you’re more likely to be successful with.
[00:32:26] But again, it was never written around generative AI. It was written, I think in the 2000s, around disruptive technology broadly. So I think that is a wonderful guide to leveraging these technologies. Another one, more for the managerial audience, is Karim Lakhani and Marco Iansiti’s Competing in the Age of AI.
[00:32:48] They’re both Harvard Business School professors; I think it’s published by HBR. If you happen to be in the education space, if you happen to be an educator, this is one of those areas where I think generative AI is going to be [00:33:00] extraordinarily impactful. So if any of you are out there listening, there’s a book coming out in a month or two called Active Learning with AI, by Stephen Kosslyn.
[00:33:10] He was, I think, Dean of Social Science at Harvard, and a professor over at Stanford. An incredible fellow, and he’s written a wonderful guidebook for leveraging generative AI for educators.
[00:33:26] Luke: Awesome. That’s super helpful. I think we’re kind of wrapping up here. Is there anything around Domino Data Lab, or any other info you’d like to share or get across to our audience, or any last thoughts you want to put out there?
[00:33:37] Dr Kjell: Absolutely. Well, if you are looking to operationalize machine learning, AI, generative AI, definitely take a look at us. We are probably the most established and largest of the enterprise data science platforms that you’ve probably never heard of. I would also do a plug for the Data Science Leaders podcast, which I’m the host of.
[00:33:56] We have a lot of incredible guests talking about generative [00:34:00] AI, and the challenges and the trench warfare that leaders have to go through in order to drive outcomes, be understood by their organizations, and get to success there. But also a good number of thought leaders who can bring a very different perspective to the AI discussion.
[00:34:17] Luke: Well said. We’ll be sure to include that in the description too, so folks can go check it out. And yeah, I really appreciate you joining us today, and I’m looking forward to getting this one out there for everyone. Thanks so much for your time.
[00:34:29] Dr Kjell: Thank you very much for having me. Greatly appreciate it.
[00:34:33] Yeah. Yeah. Take care. Have a good one. Thank you. You too.
[00:34:37] Luke: Thanks for listening to the Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately.
[00:34:52] Brave also shields you from the ads, trackers, and other creepy stuff following you across the [00:35:00] web.