Back to episodes

Episode 3

Creating Data Informed Campuses and Building AI for Higher Education

What impact will Generative AI have on Higher Education? This week’s guests tackle this question head on. Ravi Pendse and Bob Jones lead the development of ITS AI Services, a suite of AI tools that they believe makes the University of Michigan the first university anywhere to provide a custom AI platform for its entire community. In this episode of the Brave Technologist Podcast, they’ll discuss why they’re embracing GenAI (versus trying to ban it) with their students, and share some positive case studies they’ve already collected since their students have adopted U-M’s new AI tools.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API.

[00:00:29] You’re listening to a new episode of the Brave Technologist, and this one features two guests from the University of Michigan: Ravi Pendse, who is the vice president for information technology and chief information officer, along with Bob Jones, who serves as their executive director for emerging technology and support services in IT services.

[00:00:45] Together, they lead development of the ITS AI Services, a suite of AI tools that they believe makes the University of Michigan the first university anywhere to provide a custom AI platform for its entire community. In this episode, we took the time to [00:01:00] understand the motivation and goals behind investing in and creating the University of Michigan’s AI services, why they’re embracing GenAI versus trying to ban it with their students, and the case studies they’ve already collected, or practical and specific wins their students are having since adopting U-M’s AI tools.

[00:01:16] We also spoke about the right to privacy in AI and how they’re creating their solutions with security and personal privacy in mind. All right, Ravi and Bob, welcome to the Brave Technologist podcast. Thanks for joining us.

[00:01:29] Ravi: Thank you. Thank you so much for having us. It’s a privilege and an honor to be with you.

[00:01:33] Luke: Yeah. Just to give the audience a bit of background: what is your involvement with AI? What are you building, and what are the timelines for what you’re building?

[00:01:42] Ravi: No, no, I appreciate the question, and I’m going to have Bob jump in here near the end. But really, as we all know, AI is not new. The concept of artificial intelligence was introduced back in the 1950s with a famous paper by Alan Turing.

[00:01:58] And then what happened is, from the [00:02:00] ’50s until most recently, AI was reserved for a few. I always compare that to what occurred with the internet and the World Wide Web. And I may be dating myself when I say this, but when the internet was originally introduced, again, it was reserved for a few. Only a few people who knew about IP addresses and all those other things could actually use it.

[00:02:21] And then when HTTP was introduced and the World Wide Web happened, all of a sudden the use of the internet was democratized. Anybody could start using the internet. And look what we have done with that today: even this podcast is being recorded on that platform, via the web. We built e-commerce on it. We did so many different things.

[00:02:42] So similarly, when AI started in the 1950s, again, it was reserved for a few. And then when generative AI came along, overnight everybody started having conversations: What is this thing called AI? What does it do? How can I use generative AI? Oh, generative AI has problems. Oh, [00:03:00] it might take our jobs away.

[00:03:02] You know, lots of these conversations, right? So as a public research university, the University of Michigan, the world-renowned university that we have had the privilege to work at, decided to take these questions head on. So, partnering with our provost, we actually had a large faculty committee that tried to answer all of these questions.

[00:03:24] Like, one of the questions that was batted around was: oh, that’s the end of English classes and English paper writing, because now everybody is going to plagiarize. I’m here to tell you most students, especially Michigan students, are not that way. They actually are here to learn. And I saw it as a teachable moment.

[00:03:42] So we got the groups together. Our committee worked really, really hard. They produced an outstanding report, which had advice for our faculty, staff, and students. But simultaneously, I challenged Bob, who’s on my leadership team, and our team, to say: well, it’s not just about writing a report. It’s about doing things.

[00:03:59] We, as human [00:04:00] beings, learn by doing. So what can we be doing? What kind of things can we offer our community so that they can actually start using this thing called generative artificial intelligence? That’s where we started to pay attention to it, and we said, what kind of services can we offer our community?

[00:04:16] So I’m going to bring Bob in here so he may talk briefly about some of the things we’re doing. Bob, go ahead.

[00:04:20] Bob: Yeah, I mean, some of the values of U of M are inclusion, equity, accessibility, and things like that. So we really aimed to develop a platform that the entire community could use, one that was accessible

[00:04:32] and equitable. So we created an entire platform, and there are three different services built into it; I’ll briefly talk about them. The first is something very similar to ChatGPT that we call U-M GPT. Some key differences, though: ours is free right now to the U-M community. It’s accessible,

[00:04:49] true to the sense of the word: available for screen readers and things like that. So it’s really the first of its kind there. And it’s private, because we are not taking the data, the questions [00:05:00] that people ask, and sharing them with Microsoft or OpenAI. So it’s really a private environment. Those are sort of the tenets of what we call

[00:05:08] U-M GPT. And then we have a really powerful capability that we call Maizey, because of maize and blue, for obvious reasons. And Maizey really attempts to take some of the more complicated tenets of these large language models and allow end users to create contextual experiences without having to know Python.

[00:05:30] So, you know, if the three of us were to have a cryptocurrency club, we could train and index all of our data in Maizey, and then we could provision access to just the three of us. And we would type prompts and get answers that are really contextual to our interests. And this is a codeless experience that we’ve created to integrate with many of the University of Michigan data repositories.
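The workflow Bob describes here (index a group's own documents, provision access to just that group, then answer prompts in that context) resembles what is commonly called retrieval-augmented generation. Maizey's internals aren't public, so this is only a hypothetical sketch of the pattern; every name, user, and document below is made up for illustration:

```python
# Hypothetical sketch of a Maizey-style project: a private index plus
# provisioned access. Not the real Maizey implementation.

AUTHORIZED_USERS = {"luke", "ravi", "bob"}   # the "cryptocurrency club"

DOCUMENTS = [
    "Our club meets Thursdays to discuss proof-of-stake consensus.",
    "Treasury policy: club funds are held in a multisig wallet.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Toy retriever: return the doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def answer(user: str, question: str) -> str:
    # Provisioned access: only club members may query this index.
    if user not in AUTHORIZED_USERS:
        raise PermissionError(f"{user} is not provisioned for this project")
    context = retrieve(question, DOCUMENTS)
    # A real system would now send `context` plus `question` to an LLM;
    # here we just return the retrieved context to show the flow.
    return f"Based on club data: {context}"

print(answer("luke", "When does the club discuss consensus?"))
```

A production system would swap the toy keyword retriever for vector embeddings and pass the retrieved context to a language model, but the access check running before any retrieval is the part that mirrors "provision access to just the three of us."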

[00:05:53] And then the third is what we call U-M GPT Toolkit. And Toolkit is simply API access to the [00:06:00] backend environment that we’ve built.
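Bob doesn't spell out the Toolkit's interface, but "API access to the backend" usually means an authenticated HTTP endpoint accepting chat-style requests. The URL, model name, and payload shape below are assumptions for illustration only, not the real U-M GPT Toolkit API:

```python
# Hypothetical sketch of calling an OpenAI-style chat backend through an API
# toolkit. Endpoint, model name, and payload shape are all assumed.
import json
import urllib.request

API_URL = "https://example.umich.edu/api/v1/chat/completions"  # placeholder URL

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an authenticated JSON request for a chat-style backend."""
    payload = json.dumps({
        "model": "umgpt",  # hypothetical model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Build (but don't send) a request, then inspect it.
req = build_request("Summarize today's lecture notes.", "TEST-KEY")
print(req.get_header("Content-type"))  # -> application/json
```

Sending it would just be `urllib.request.urlopen(req)`; the point is that once a campus backend exposes this kind of endpoint, any script or application can plug into the same private environment as the chat interface.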

[00:06:02] Luke: To back it up a little bit: when you mention community, are you talking about the broader public, or students, or programs at the college, or other colleges?

[00:06:13] Can you give us a little color on that?

[00:06:14] Ravi: Happy to. So currently, these tools that Bob just talked about are available to our faculty, staff, and students, which altogether total 100,000-plus. So it’s a very large group, and we refer to them as our community. And the other key part that Bob mentioned, which is very important: as you know, right now, if you want to use OpenAI’s ChatGPT, especially GPT-4,

[00:06:35] you typically have to pay $20 a month. Ours is available to our community, all of our faculty, staff, and students, for free. And that’s where the equity issue comes in. Because, you know, at any college, including Michigan, we will have some students who have means and other students who may not, and we wanted to make sure this was an equitable experience: anybody and everybody should be able to use this platform

[00:06:57] to learn, to teach, and to [00:07:00] essentially innovate together. And so that’s why we did that. So when we say community, it’s all of the above. And our goals for the future are actually to make some of this available to institutions that perhaps may not have resources like Michigan does, for example, a smaller community college or some of the HBCUs, if they feel we can assist and help them.

[00:07:21] We are absolutely delighted to assist and help in any way we can, from sharing our knowledge of how we set this up to potentially even making this platform available.

[00:07:33] Luke: Wow, that’s awesome. Would you say that making AI more ubiquitous and accessible is one of the big motivations behind creating U-M’s AI services?

[00:07:40] Or is there anything else really driving the motivation?

[00:07:43] Ravi: So I think that’s one of them. I mean, when you think about Michigan, it is about making a positive difference and a positive impact in the world. And so the idea is, we feel that at the University of Michigan it’s one of our responsibilities to be focused on the public good.[00:08:00]

[00:08:00] And that public good involves, in our view, the community of the entire world. So whatever we can do to assist them. Obviously there are limitations on how much we can do on day one, but over time, our goal is to support learners, researchers, and innovators all over the world so that they can create together.

[00:08:20] Luke: I love that. Such a good counterweight to a lot of the doom and gloom stuff you hear out there about what this stuff is going to do. The more voices we have helping to shine a light on some of the good things it can be used for, the better. Super awesome, and really cool to hear about. How is U-M GPT different from ChatGPT?

[00:08:38] Ravi: So I think Bob mentioned it, or started to, but really, one of the key aspects is, first of all, it is free to use for our community. Second, it supports screen readers, so a person who is visually impaired can actually, for example, use U-M GPT; you cannot currently do that with ChatGPT. Third, [00:09:00] there is the data that is shared with ChatGPT.

[00:09:02] For example, if you ask certain questions, those questions are being shared with OpenAI. Our instance is private. We take privacy extremely, extremely seriously. I’ve been quoted many times saying that I feel the right to privacy, in my mind, is as fundamental as, for example, the right to vote.

[00:09:18] I feel like we should all be focused on privacy as much as we are focused on everything else, and we take that extremely seriously. So the idea is, when our community members leverage this platform, the queries they ask, the questions they ask, the information they get, that’s all private and protected with appropriate security in place.

[00:09:36] So those are the three things I can think of. Plus, ChatGPT has certain limits in terms of how many queries you can ask in, say, a given hour. Our limits are far more generous than what ChatGPT does currently.

[00:09:48] Luke: That’s awesome. And I love hearing the privacy angle. Obviously, we care a lot about privacy here at Brave.

[00:09:53] So it’s really cool to hear people putting it at the forefront of what you’re doing, because I think it’s one of those things where [00:10:00] the nature of these tools is so conversational that people just forget about what they’re inputting, right? And so it’s something that’s going to require a bit of handholding as people get in more deeply, but also protections too.

[00:10:13] Ravi: If I might mention one more thing on that privacy front: almost a year and a half ago, the University of Michigan released a student privacy data dashboard called ViziBlue, V-I-Z-I-B-L-U-E. When our students log in, the dashboard tells them what data we collect on them, why, and how long that data is kept, along with a lot of data-literacy articles and help for learning about these things, because we believe students coming out of Michigan should, of course, be data literate and really aware of and sensitive to privacy.

[00:10:48] And similarly, later this fall, we are releasing a similar data dashboard for our faculty and staff. The idea is, we want to share with them very transparently what we collect, because it’s part of business, right? When you swipe [00:11:00] your card to enter a building, we know you entered the building.

[00:11:03] If you’re connected to a wireless access point here and walk across campus, as you switch access points, we kind of know you’re walking across, right? Now, do we keep all of that information? No. Do we follow people around? Absolutely not. But that information is collected, and at the appropriate time it is deleted.

[00:11:19] But we want our community to be aware. So, not quite related to our current topic, but related to privacy, so I thought I would mention it.

[00:11:27] Luke: No, yeah, it’s amazing. I mean, some of these things we started to see more adoption of on the European side, but it’s good to see people leading, especially from the university level, right?

[00:11:36] With these types of things, if you can get that going now, while they’re still learning, they can carry it with them throughout their careers, right? I would imagine.

[00:11:43] Ravi: Plus, many of our students end up becoming policymakers as well, right? Exactly: policymakers with knowledge. Whether it’s crypto, whether it’s AI, whether it’s privacy, we need more policymakers who understand all of these aspects so that we can have [00:12:00] appropriate, sensible policy and legislation, hopefully coming from Washington and other parts of the country, right?

[00:12:05] Amen to that. And so that’s why we are very focused on that.

[00:12:09] Luke: Yeah, no, that’s amazing. Where do you go with your generative AI platform from here, do you guys think?

[00:12:15] Bob: So that’s a great question. We’re ahead of the game, and we’re not going to rest on our laurels. Really, you know, we have a shared vision of a data-informed campus, where contextual data that you have appropriate access to is available in natural language at your fingertips.

[00:12:33] And with the platform that we’ve built, we have the means to get there. So, no matter what your role is, whether you’re doing research, teaching and learning, or even some administration, that role would give you a lot of different accesses contextually, based on your needs, right?

[00:12:50] Your needs around research and teaching and learning. And with the tech that we’re building: right now we have integrations with Google, Dropbox, [00:13:00] and Canvas, which is our learning management system. But why not SQL databases and other systems, where we can use natural language and provision appropriate answers based on your access?

[00:13:10] So you can imagine that U of M can be the first higher-ed institution where, using a prompt, you can access almost any information that you need in the same place. That’s a pretty big opportunity in front of us.

[00:13:26] Ravi: And what Bob is talking about is, early in my career here, working with Bob and leaders across the institution,

[00:13:31] we set a data vision for our campus: that we would be a data-informed campus, not a data-driven campus. And the distinction I make there is that I want a human being always in the loop. So the human-in-the-loop concept is important, and hence “data-informed,” because if it were just data-driven, we could actually program a machine to just make the decisions.

[00:13:50] That’s not the idea. The idea is to have a human being in the loop. And the idea there also is that, depending on your role, you should be able to, in a very simple box, like a [00:14:00] text box, type your favorite data questions, and the systems in the back should be able to collaborate and cooperate with each other and produce the answer.

[00:14:07] That’s very powerful, as opposed to having to read a bunch of dashboards and charts and graphs, because human beings intuitively think different things. And as you’re interacting, you get new ideas and new things to think about, and you innovate like that, by question and answer, right? And so this would be a way for us, our leaders, our community, on the basis of their role, to be able to access that data.

[00:14:29] Now there’s a lot more to it, like the security of that data. Because if you’re not authorized to access that data, you may ask the question, but you’re not going to get the answer; you don’t have the authorization. So we take security very, very seriously, along with privacy. So around that, how do you build it?
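The gated-answer flow Ravi describes (anyone may type a question, but only authorized roles get the data) can be sketched in a few lines. The roles, topics, and figures below are invented for illustration, not U-M's actual access model:

```python
# Hypothetical sketch of "ask in a text box, answer gated by authorization."
# Roles, data sources, and numbers are all made up.

ROLE_GRANTS = {
    "registrar": {"enrollment"},
    "researcher": {"grants"},
}

DATA_SOURCES = {
    "enrollment": "Fall enrollment is 52,000 students.",
    "grants": "Active research grants total $1.9B.",
}

def ask(role: str, topic: str) -> str:
    # The authorization check runs before any data system is queried.
    if topic not in ROLE_GRANTS.get(role, set()):
        return "You may ask, but you are not authorized to see this answer."
    return DATA_SOURCES[topic]

print(ask("registrar", "enrollment"))
print(ask("registrar", "grants"))
```

In a real deployment the natural-language layer would map the typed question to a topic and the backend systems would cooperate to assemble the answer, but the design choice is the same: authorization is enforced before retrieval, so an unauthorized question simply never reaches the data.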

[00:14:45] Now, this model that we’re talking about, the generative model, makes that so much easier. Another application, currently being built by many of our faculty colleagues, has significant ramifications for mental health and wellness, because, as you know, that [00:15:00] is quite a big challenge across all institutions of higher education: the mental health and wellness of our students.

[00:15:06] And sometimes it happens because of stress as you go through difficult exams. And by the way, a Michigan education is among the top in the world. It’s not easy; you have to work hard, and that comes with stress. So imagine having, for example, an AI tutor, a coach, a copilot working with you throughout all of your classes.

[00:15:24] So we have faculty members who are actually leveraging Maizey, which Bob was talking about, to build AI tutors for their classes. So at one in the morning, if you’re struggling with that calculus problem or that thermodynamics equation, or whatever the topic may be, you may have an AI bot for your class that can work with you and give you hints and assistance. Or you may say, hey, bot, can you suggest some sample exam questions?

[00:15:49] As long as the professor has authorized it, it will actually generate some sample exam questions for you to practice. And imagine, if you had all these tools, how much better the education and learning would be. Because what’s the [00:16:00] goal of exams, after all? It’s about assessing whether you have learned something, right?

[00:16:04] And if you solve 50 different problems, there’s nothing you’re not going to know. And that’s the idea.

[00:16:11] Luke: And I can imagine, too, it can help in other ways, like telling you to get up and go take a walk, or to step away from this or that. But it’s one of those things I absolutely love about this technology: it can be really helpful for people both in what they’re doing at work and in learning, right?

[00:16:27] And it’s so cool that you guys are leading the way with these approaches. Thoughtful approaches. I think that’s what I’m getting from you guys, hearing “data-driven” versus “data-informed”: care and thought is going into this, and it’s not just bandwagoning and shooting first and aiming later.

[00:16:41] You know, it’s important. It’s really important.

[00:16:44] Ravi: It’s a great point you bring up, Luke. It’s about being thoughtful. So everything we have done, and this is coming directly from our faculty colleagues and others, all of us have been very thoughtful about how we approach this. Because, as you know, sometimes the answers you get from some of these [00:17:00] technologies, like ChatGPT and others, can be incorrect; those are called hallucinations, right?

[00:17:05] So there is a conversation going on around AI literacy, just like we talk about internet literacy. Just because Google or DuckDuckGo gives you an answer doesn’t mean it’s the right answer. You have to have a feel for the answer. You have to be able to check your sources, compare, have discussions.

[00:17:20] Similarly, we encourage our students: just because ChatGPT spits something out doesn’t mean it is right. So how do you think about this? How do you think about that answer? How do you think about your pedagogy if you’re a professor? How do you change the way you teach in the world of ChatGPT? The idea is that this is a teachable moment.

[00:17:39] This is a teachable tool. How do you leverage it to augment and enhance humanity?

[00:17:46] Luke: No, that’s awesome. What kind of impact do you think generative AI can have on higher education?

[00:17:50] Bob: I think higher education does a lot of things really well. And in this case, we didn’t invent what is now, like, the [00:18:00] Kleenex of the large language model world, and that’s ChatGPT, right?

[00:18:04] But what we can do is really understand its impact. You know, that’s a focus of higher ed: understanding the impact on those not well represented, understanding the impact of its biases. And I think the exploration of appropriate uses is where universities excel. So I think universities, and specifically U of M, have an opportunity to lead the world in understanding appropriate usage of large language models.

[00:18:29] And in this case we’re talking about ChatGPT, but there are a lot of other places to assess the impact of these models on the rest of the world. I think that’s nothing but opportunity for the University of Michigan.

[00:18:41] Ravi: Very well said. I would only add that, for example, when calculators came along, and again, I’m dating myself, there was this fear:

[00:18:51] oh my God, that’s the end of mathematics as we know it. I mean, who’s going to do this and do that now? So similarly, there’s fear around ChatGPT, and universities, by using it thoughtfully, by using it responsibly, by keeping AI literacy in mind, can show the world how to adopt, how to leverage, how to use this technology the right way.

[00:19:12] And we can create future AI engineers who are thoughtful, who are leading the world, who are leaders all across. You know, at Michigan, we talk about “leaders and best”; that’s one of our taglines. And I really believe that GenAI, with what Michigan is doing, will allow us to not only be the leaders and best, but really show the world how to work together and work with each other so that overall humanity is enhanced, our world is enhanced, and there’s a positive impact around the world.

[00:19:44] So whether it’s the climate change problem that you may be trying to address, or the distrust that exists in our own country right now among certain sections of our population, it can all be approached thoughtfully

[00:19:57] if people come together. And this [00:20:00] is an amazing technology tool that will allow for those learning opportunities and for those teaching opportunities.

[00:20:06] Bob: To harken back to something that Ravi said that I think is really, really important: we need policymakers who understand this tech.

[00:20:14] And I think the university doing what it can to expose great use and inappropriate use, and the places where it’s biased, and informing our policymakers, or having policymakers come from U of M, will allow our country to make better decisions. You know, you could Google “gen AI” and you’re going to get things about it being sentient, or countries banning it; all this crazy news is really hard to calibrate if you’re sort of flying high and not paying a lot of attention.

[00:20:43] It can just be scary. It shouldn’t be scary; it should be understood. There are a lot of advantages, but really being clear about what they are and how we should use them is going to be really important. We’ve been talking about LLMs a lot, but there are other powerful GenAIs. I’m sure that a clever [00:21:00] person could take this podcast, where Ravi and I are sitting next to each other, and make us two different people, right?

[00:21:05] And we need to understand what that means in a world where, as Ravi eloquently talked about, there’s a lot of information out there, and there’s this idea of what is the truth and what is a lie. Well, we’re headed to a place where that’s going to be very difficult, because of the “evidence” in front of your eyes, right?

[00:21:22] Like, this conversation here could be Darth Vader, Luke Skywalker, and Han Solo instead of our voices, and that can be done pretty easily. So I think we’re going to be faced with provocative questions: not proving something’s wrong, but proving that it’s true, right? Those are the sorts of things we should be thinking about around GenAI.

[00:21:40] And then I actually think that’s where these conversations even intersect with Web3, right? The blockchain is a perfect ledger, and perhaps it’s a tech that should be used to prove that this video is true, based on the blockchain and this transaction, with the Brave [00:22:00] technology or something like that. These are big questions that are going to face the world, and we need to be prepared to answer them.

[00:22:06] Ravi: You know, we have talked about the concept of truth engineering, right? There could be a degree in truth engineering, where a person is able to leverage all of these tools to tell right from wrong, true from false, and so on. Because as people are using deepfakes and some of these other challenging things, we shouldn’t be scared of them.

[00:22:25] We should take it head on and approach it. That’s what leaders do: they take things head on. They don’t run away. They try to understand, because fearmongering is always going to be there, and absence of knowledge is what causes fear. So if you’re knowledgeable about it, if you’re ready to learn about it, then fear has to take a back seat.

[00:22:46] Luke: And you’re not really going to learn the practical use, and how people are applying these things, if there’s a moratorium on people actually using them, right?

[00:22:53] Bob: It’s literally the worst decision you could make, to ban it, right? We need to understand it, and again, I keep coming back to: that’s what [00:23:00] universities are really good at.

[00:23:01] Yeah.

[00:23:02] Ravi: And the other part is, let’s assume for argument’s sake that we decide, you know what, we’re going to ban this here. The rest of the world is not banning it. They’re not running away from it; they’re really running ahead. And so even if we run really, really fast, we will still be standing still.

[00:23:18] Right. So we really need to be flying, not just running fast. And we can do that with the power of generative AI, and with the help of institutions like yours and others, to really spread the word that we need to fly with this thing. Because if we just run very fast, we’ll really be standing still.

[00:23:39] Luke: Absolutely. And I think you guys have a really strong point about informing policymakers, too, because they only know what their staff can find for them, right? And the more you guys are out there, the more people can actually see the good things this can be used for, and even just know what the problems are.

[00:23:54] Absolutely. And as this gets adoption: we’ve never seen this type of adoption [00:24:00] before at the scale that it’s coming in, especially now with all the hype around it. I think we’re getting past the parlor-trick phase, and now people are actually starting to build the tooling and integrate it into a lot of things.

[00:24:12] And you guys have this really awesome position of being able to study these things in a different kind of context than a commercial one, right? Where the motivations are different, and people aren’t just trying to hit that quarterly number. So I think it’s really excellent. I mean, you’ve been talking about a bunch of different things here around what AI could do and what you guys are studying.

[00:24:31] What’s been the most impactful personal or professional example of AI improving your life or work, or the top AI hack you all have seen? We can split it up into two, or however you guys want to answer it.

[00:24:42] Bob: Oh, you mean besides the robot that I made that writes emails in his voice, so you’re not sure who you’re talking to?

[00:24:50] Ravi: I actually write to my staff every Wednesday.

[00:24:55] I write to all of them, and I take great pride in it because it’s in my own personal voice. [00:25:00] So Bob actually took several hundred of my previous emails and trained the robot, essentially, to write like that. And when he shared it with me, I actually showed it to my wife, and it was really, really well done.

[00:25:13] I still don’t use it, but it was well done. And she jokingly said to me, “I like this other guy better.” That’s what she said. I told Bob, thank God it was a robot; otherwise she might send me packing. By the way, we’ve been married 35 years, and Bob knows that as well.

[00:25:32] But no, I mean, go ahead, Bob, with something more concrete than that.

[00:25:36] Bob: You know, I’m being honest here: I’ve increased my personal productivity by 2x. I use LLMs every day, and not just LLMs; I use other GenAI, and reach outside my area of expertise. So if I need to launch a project or a program, it’s gotten really easy for me to feed in key points and build the framework for a program.

Now, [00:26:00] something that’s really important for everybody to hear, especially as we preach positive use and appropriate use of GenAI: it’s always human, GenAI, human, right? You would never rely on the output without checking it. And I think that speaks to some of these doom-and-gloom articles about how X number of jobs are going away, and so on.

[00:26:22] Jobs will change, but they’re never going to exist without humans. Humans are the soul of what we’re trying to accomplish here. So, just in terms of personal productivity, it’s greatly increased what I do on a daily basis. And I’ll take a moment to talk about a student; this touched my heart, when they were talking about the advantages of our GenAI platform.

[00:26:45] And one thing the student said was that there’s some anxiety and discomfort in raising your hand in a lecture class to ask a question, right? For fear of being looked down on, like you’re not smart enough or you might not know something. Whereas in this class, one of [00:27:00] the faculty members has created tutors, and she’s very comfortable asking the tutor questions based on the context of the information being presented.

[00:27:09] And I thought that was very touching because it hits on that student mental health component that Ravi talked

[00:27:14] Ravi: about earlier and candidly, I can relate to that comment that was made because when Bob shared that there are tears in my eyes, I’ll tell you why I came to this country as an international student.

[00:27:24] I did not speak English as well as maybe I do today. I’m still a work in progress on that front as well, but I do better today than I did many, many years ago. And the fear there was, I was extremely shy as well. I was very shy. I really didn’t want to ask questions in the class because I felt like, well, what if I ask the wrong question?

[00:27:41] Will everybody think I’m stupid? I mean, there are these concerns and fears when you’re new to a country, away from your family. And I went through that as an international student. And I wish I had a tool like an AI tutor back then. While I was lucky enough to do well in my classes, I could have done even better.

[00:27:57] These kinds of tools are really accessible and [00:28:00] available to people. The other area where I’ve seen this happen is, you know, a PhD student, an amazing writer, because he grew up in a family where writing, reading, and all those things were commonplace. But not everybody is so blessed. And so there are kids who may be brilliant, but maybe their writing skills need just a little bit of a nudge.

[00:28:21] And these kids, when they’re writing essays for college admissions, for law schools, and other places, it is important that they have that little nudge and advantage. Not that they are going to plagiarize anything, but if it’s going to help them make the writing better, then why not? Why not? We should. I mean, again, human in the loop.

[00:28:37] So it’s not like you’re cutting and pasting something and providing it as your own. But if it’s going to help you make your writing better, then we should be considering and leveraging these types of tools. And again, the key is thoughtfully, thoughtfully. So the idea is not to fear, but to embrace. The idea is to leverage it, not to completely replace that human being.

[00:28:57] Luke: Yeah. And I think, like, uh, this is a really [00:29:00] awesome story too. And I think it’s one of those things where, you know, when people talk about breakthroughs, they tend to think of them, you know, in a certain context. But those little things that could be holding somebody back, or getting that answer that they’re not able to get just because they can’t get the question out.

[00:29:15] You never know what that kind of thing can unlock, and how this could have this exponential effect when everybody has access to these things. It’s underrated, like how much these little things can lead to big breakthroughs and change somebody’s experience in life in general.

[00:29:30] So I think that’s an awesome story. And I think we need to hear more of those from people as the technology gets in more and more hands. But the productivity aspect too is something we hear a lot, like from programmers and people that are in this space. And it’s kind of like, you know, okay, uh, you go from having, you know, the plow to the tractor.

[00:29:47] Right. Like, and it’s like, dude, we still have farms. Like, yeah, that’s not going away. Like now, you know, we can just advance everything else. You know, it’s a no brainer kind of thing, but it gets clouded with a lot of this hyperbole

[00:29:57] Ravi: that’s out there. May I share with you another related story? Yeah, yeah, yeah.

[00:29:59] We [00:30:00] had a faculty colleague who is writing a book. And she was looking for some of her own citations, or citations for her work. She tried the traditional way of, you know, going to the library, and we have amazing libraries here, by the way, this is nothing against libraries, but she was simply running out of time.

[00:30:15] And so she decided to give UMGPT a try. And when she did, to find her own citation, her exact quote that she shared with me was that she was able to find the information she was looking for in less than 10 seconds. Wow. And as a result, what could have taken her many days, she was able to get in 10 seconds, and then use her time more productively to develop her book further, to spend more time with the students that she loves, and do other productive things to support our community.

[00:30:41] Yeah, and I think

[00:30:42] Luke: we hear those things all the time, right?

[00:30:49] And people will get kind of burnt out, and these things can actually help alleviate some of that. And I know that we’ve been talking kind of about doom and gloom and stuff, but bringing it back to the practical elements, like, what [00:31:00] things do kind of concern you, or what are you a little bit worried about, being as immersed in the AI field as you are, with what you’re seeing?

[00:31:07] Like, what are the concerns that are kind of top of mind, I guess? I think

[00:31:11] Ravi: they’re not so much concerns as something that we need to solve together, you know. There is still a general level of distrust among people, not necessarily at Michigan, but generally speaking, people still worry about this technology, like, is it going to take my job away?

[00:31:26] And so we all have work to do there to explain to people: chances are somebody who knows this technology is going to take your job away, not the technology, but somebody who knows this technology. So it’s important that you learn. I’m an engineer by profession, so continuous learning comes naturally to me because, you know, the field doesn’t stop.

[00:31:45] And frankly, human ingenuity doesn’t stop. We are curious people. We want to learn new things. And so we have to kind of think about that. So that’s one concern that people always have. The other concern, that Bob alluded to a few minutes ago, is the idea of bias. Bias that’s built into these [00:32:00] models, you know. And we talk a lot about diversity, equity, and inclusion.

[00:32:03] That’s one of our core values at the University of Michigan. Essentially, when the large language models were built, they were built leveraging all of the information that’s out there on the internet. And a lot of that information is inaccurate. A lot of that information is biased. So the concern is, the answers that ChatGPT is producing, and when I say ChatGPT, that’s just an example, there are other language models that do the exact same thing, could they be biased answers?

[00:32:28] And if they are, and then you publish that information back again, are you propagating this bias? And that’s where we need to have these thoughtful conversations and question this. Question this, learn, address it, fix it. So, when Bob talked about our tier 3 in the AI platform, we have faculty colleagues who are trying to develop their own language models.

[00:32:49] Notice I didn’t say large language, because you can have a much smaller language model if you have a vetted data source. If you have a data source that you know to be accurate, that is, you’re not [00:33:00] essentially scraping all of the internet, but leveraging a subset of data that you have verified to be accurate,

[00:33:05] then you can actually get the same accurate results with a smaller language model, and you can verify that it avoids bias. So bias is a concern. And then we already talked about hallucinations. So all of these are concerning to people. And like I said, it’s the fear of the unknown. So we need to be talking about it.

[00:33:23] We need to not get defensive about it. Not everybody is going to necessarily appreciate everything about AI today, but that’s okay. We are here to have that dialogue. We are here to learn from each other, and maybe they’ll teach us a thing or two, and that’s completely okay, right? We have to be open to it.

[00:33:39] And so having that dialogue, having that dialogue community wide, not just at Michigan, but university to university. So there is that fear, right? Fear of the unknown, and that’s what we still need to address and advocate around. But maybe Bob has more things to add.

[00:33:52] guest: Those are all great things. You know, one thing I would add to that is that I believe we’re here to leave the world better than we found it.

[00:33:58] And I think as [00:34:00] new and powerful capabilities come along, we need to be careful that it’s not only in the hands of those with means, and that’s kind of, you know, one of the things that drove our development of the platform. We are going to propagate a world that we won’t be proud of if, you know, I talked about enhancing my productivity two X.

[00:34:17] Well, for somebody that isn’t at U of M, that can’t afford the 20 bucks a month, that’s a problem, right? This is similar to, you know, people without internet. That’s a problem. Power and access can’t be restricted to those with means. And when I think globally, that’s the thing I’m probably most concerned about with Gen AI and AI and these powerful tools: they need to be accessible to everybody.

[00:34:42] Luke: Yeah,

[00:34:43] Ravi: that’s a great, great point. And that’s the mission we are on. That’s what we would like to do eventually, partnering with our government partners, other corporations, and others, asking the question: how can we make this available to a wide spectrum of colleagues all across the world?

[00:34:58] Luke: Yeah, that’s such a great point.

[00:34:59] I mean, you know, because [00:35:00] I work with people who maybe were self taught, right? And that’s one of the beautiful things about the internet, is just that, you know, if there’s something you want to learn, you can learn and do it, your own drive can kind of, you know, help you get to that stage.

[00:35:12] And AI is just at that same level, right? And making sure that it’s accessible and localized and all of these, like, these are hard problems, you know, that have to get solved, but I’m really glad that you guys have folks working on them. It’s great to hear. You know, speaking on that note, we kind of talked about accessibility and we talked a little bit about privacy.

[00:35:33] Do you feel that these things are addressed seriously enough in this space right now? What would you stress is most important about these two elements to the business community, or to a broader community outside of, you know, the academic community? What points would you want to stress to these communities about why accessibility and privacy

[00:35:50] Ravi: are important?

[00:35:51] I tend to be an optimist. I tend to believe that people at their core are good people and want to do the right thing. So it’s the responsibility of an educational institution [00:36:00] like Michigan to put a mirror in front of people to say, listen, we need to be thinking about equity. We need to be thinking about accessibility.

[00:36:06] We need to be asking the question, as you roll out these technologies, how are you making sure that they’re available to more than just a few? And that’s the role we can play as an institution of higher education, putting a mirror in front of people and reminding them that we collectively have a responsibility.

[00:36:23] Responsibility to educate. Responsibility to make things available. Responsibility to teach and learn. And then, only then, together, will we be able to enhance this beautiful world we have an opportunity to live in right now. Cool.

[00:36:34] Luke: Yeah, I think we can kind of wrap it up on something light. A lot of our users might be technically savvy people, or people that have been developing in technology but aren’t as familiar with, like, getting into AI and things like that.

[00:36:46] Obviously, you guys have a program, but, like, are there resources or tools or people that you’d recommend somebody that’s new in this space look into a little bit more closely? Or,

[00:36:56] guest: we had a cohort early on. When was the, uh, Gen [00:37:00] AI cohort? February, March. Right. In that timeframe. But it was right, right when the news was particularly terrible at talking about Gen AI, and I don’t know that the news has changed.

[00:37:09] I typically don’t Google it to find out what the news agencies are thinking about Gen AI, but it was certainly right around when, you know, it was being banned and it was sentient, and again, all these crazy ideas. And we brought about 50 people together from our organization who had some interest in Gen AI, and we promoted it with “this cohort is for you if,” and it listed some other principles we used.

[00:37:34] And one of them that I think is really, really important is you just need to be willing to roll up your sleeves and dig past the noise to get to actual signal. You know, there are many, many different ways to do that. You know, certainly, you know, YouTube, and going on X and following good founders.

[00:37:49] Like, so you’re asking about specific people. I’ve always found Alexis Ohanian to be great and interesting. And, you know, I think he’s a good founder. And Harrison Chase, who developed [00:38:00] LangChain, is a really excellent person to follow. And really landing on people that are promoting the positive engagement.

[00:38:07] Here’s how you can roll up your sleeves and really begin to understand what this tech is. And generally for me, my personal learning style has been to get past the fear and begin to roll up your sleeves and figure it out yourself. And then you’ll quickly realize this isn’t scary. It’s a wonderful opportunity.

[00:38:25] And we need to learn more and communicate differently about how we’re talking about it.

[00:38:29] Ravi: Very well said. I would just like to add to that. I wouldn’t be doing my job as a professor if I couldn’t recommend one book. So one of my favorite books on this topic is by Kai-Fu Lee. Kai-Fu Lee and Chen Qiufan wrote a book called AI 2041: Ten Visions for Our Future.

[00:38:46] So, AI 2041: Ten Visions for Our Future. And what’s interesting about that book is even though it’s a book on AI, they try to write it as a novel, and I frankly think they succeeded. So frankly, anybody can [00:39:00] pick up that book and read it. We may or may not agree with everything that’s described in that book slash novel, because they take use cases.

[00:39:07] They take use cases in different countries and how the world might look in 2041, ruled by AI. Yeah. Managed by AI. So there is some fear there. But overall, I thought it was a pretty good book. And, you know, it was an interesting read, and one could read it in a couple of sittings. So that’s one book I would recommend. As for other folks to look into,

[00:39:27] I do follow a podcast put together by Brad Smith, who is at Microsoft, and he brings in some really, really interesting speakers on his podcast. They talk about AI, but they also talk about privacy and many other topics that are dear to my heart. So that’s another one that I recommend. And frankly, I would be remiss if I didn’t put in a plug for the University of Michigan.

[00:39:46] Look us up on the web. There’s a lot of good stuff that we’re putting out that people could potentially learn from. And I’m sure as we interact with them, we will learn from them as well.

[00:39:54] Luke: That’s excellent. And we’ll be sure to include links to that too, in the description so people can check you all out. [00:40:00] Is there anything else you want to kind of close on or anything that you want to let people know about that we maybe didn’t cover in the questions here before we sign off?

[00:40:07] Ravi: I think the only comment I would make is that as we think about generative AI, we need to approach this technology with our eyes wide open and feet on the ground, and really be willing to learn and not fear. That’s how we’re going to be able to enhance it. And the other point I want to emphasize, and I think I said it once before, is that AI is never, in my opinion, never going to replace human ingenuity.

[00:40:29] It’s going to augment it. Excellent. I

[00:40:32] Luke: really appreciate you guys for coming on. And, you know, I’d love to have you all back too to kind of check in on how the program’s going and, and, you know, uh, highlight anything you all are doing that, that you want more people to know about. Door’s always open. No, we appreciate that.

[00:40:43] And, uh, I really appreciate you all joining us today and telling us all

[00:40:47] Ravi: about what you’re doing. Thank you. And if you send us your address, we’ll make sure we get you these shirts.

[00:40:53] Luke: Excellent. We’ll do it. I’ll return

[00:40:55] guest: the payment. I am interested in some Brave swag, though. I am a Brave

[00:40:58] Luke: user, so. We will definitely hook [00:41:00] that up.

[00:41:01] Thanks, Luke. Take care. Thanks, guys. Have a good one. Thanks for listening to the Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com, and start using Brave Search, which enables you to search the web privately.

[00:41:20] Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • Empowering the next generation of policymakers (students!)
  • Making AI accessible to everyone, and the importance of removing any barriers to its power
  • The right to privacy in AI, and how they’re creating solutions with security and personal privacy in mind
  • How to reduce fear mongering around AI and emerging technologies
  • Creating a “data informed” campus where humans are always in the loop

Guest List

The amazing cast and crew:

  • Ravi Pendse and Bob Jones - Vice President for Information Technology and Chief Information Officer, and Executive Director of Support Services

    Ravi Pendse is the Vice President for Information Technology and Chief Information Officer at the University of Michigan. Bob Jones is the Executive Director of Support Services at University of Michigan. Together, they lead the development of ITS AI Services, a suite of AI tools that they believe makes the University of Michigan the first university anywhere to provide a custom AI platform for its entire community.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.