
Episode 65

Preparing the Next Generation for AI

John K. Thompson, Global AI Leader at EY, addresses pressing questions about education, regulation, and society’s AI readiness. He challenges the traditional educational divide between technical and creative disciplines for today’s students, and shares why the biggest risk with AI tools is giving powerful technology to people without training in structured thinking.

Transcript

Luke: [00:00:00] From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist Podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of business operations at Brave Software, makers of the privacy-respecting Brave browser and search engine.

Now powering AI with the Brave Search API. You’re listening to a new episode of The Brave Technologist, and this one features John K. Thompson, an international technology executive with over 38 years of experience in the fields of data, advanced analytics, and artificial intelligence. He’s currently responsible for the global AI function at EY.

His role is to actively lead the design, development, implementation, and use of innovative AI solutions, including generative AI, traditional classical AI, and causal AI, across all of EY’s service lines and functions for EY’s [00:01:00] clients. In this episode, we discuss the risks associated with generative AI, the importance of education in preparing the next generation for AI, the necessity of regulation in the industry, the holistic view needed to understand the interplay between data and analytics, and the role of EY in helping companies navigate AI implementation and compliance.

Now for this week’s episode of The Brave Technologist. John, welcome to the Brave Technologist.

How are you doing today?

John: Good, Luke. How are you today?

Luke: Great, great. Can you give us a little bit of background about your journey to what you’re currently doing, and anything in your background that positioned you well to get where you’re at now?

John: Yeah, it’s interesting. I just started teaching at the University of Michigan, and, you know, the students are always interested in: how did you get here? How did this all begin?

If you look at the bottom of my LinkedIn profile, I have pre-college employment, which I put on there kind of as a lark. [00:02:00] I was an auto mechanic, I worked in machine shops, and I even dug ditches, or dug graves for a while. And they looked at it and said, how’d you end up here?

It was just one of those things, just a journey. I was never gonna go to college. I ended up going to college on a whim. The school I went to in Michigan had just started a four-year computer science program. I had never heard of computers, so I tried that. I got out of school and I was building systems and writing code in Assembler, and then someone said, hey, we need to analyze some data.

So I did that, and it just took off from there. I’ve always been interested in data and analyzing data, and when I first learned about AI, you know, 35 years ago, I just thought, well, that sounds really cool. That sounds like something fun to do. So it’s been a random walk, I guess, is the way to say it.

Luke: That’s awesome, though. I love that. It’s so great, too, that openness for the journey, right? Kind of [00:03:00] going where the road takes you, in a way. That’s cool. And you mentioned 35 years of learning about AI. In that time, what are some of the most significant advancements in AI technology over the past couple of years? And where do you see the current risks, given the amount of noise out there around fear and things like that?

John: Well, there’s a lot of noise, that’s for sure. It’s been wild. My current day job is I’m the global head of AI for EY, and I started that job just over two years ago, just as the gen AI wave started to crash over the world.

And that was interesting. You know, most of my background has been traditional data science: what I call foundational AI, predictive AI. In generative AI, we’d been playing around with natural language processing, natural language generation, and natural language understanding models for about six years, about four years before the whole generative AI wave exploded.

[00:04:00] And we were looking at it saying, oh, it’s interesting, it’s cool, and we’re gonna be able to move beyond just tabular data and structured data. And then the announcement of ChatGPT just took the world by storm. So it’s cool technology, it’s fun to work with, and it certainly opens up the horizon.

You know, I thought about it long and hard. Before LLMs, we were working with about 10% of the world’s information: structured information, numbers, sales and visits and things like that. Now we’re working with about 90% of the world’s information. So that really opens the aperture. You go from 10%, which you were downsampling to probably 1%, and now you’re up around 95 or 98%. Just by the nature of that step change in the data that’s available, things have to change. You just can’t keep doing what you were doing. I think we’re good right now as far as, you know, we’re all learning about LLMs: large language models and [00:05:00] small language models and domain language models and all those kinds of things that we’re using for generative AI.

I feel pretty confident that we’re getting a good handle on that.

Luke: Mm-hmm.

John: We understand how to build some applications, have hundreds of thousands of people hit them, and keep them safe and secure. Now we’re moving into agents, which is a whole other world. An AI agent, a gen AI agent, can do pretty much anything a human can do.

They can bind you to contracts, they can spend money, they can do all sorts of stuff. So up to now, I don’t think the risks have really been that severe or that concerning, at least for me personally, and for the applications I build. I think they’re all pretty well controlled. Agents are gonna be a whole other thing.

Luke: Yeah. Where do you see this potentially getting off the rails?

John: Yeah. We’re working hard on making sure that we understand where we can take agents. We built skills before, which were basic prompt-and-response kinds of [00:06:00] things, and those were simple to control. Agents themselves, when you first think about them, can be kind of frightening because, like I said, they can do everything a person can do.

But the great thing about it is, if you think about it: think of an employee, or think of a role in your organization. Those people are governed by rules, regulations, policy, training systems, all those different kinds of things. You can do the same thing with an agent.

You can put all that surround, or governance, or control on the agent, either through a RAG environment or by ingesting it into the model as context. And the model, theoretically, will probably be more compliant than employees would be, because models don’t forget. People can forget what they’ve been told to do and not do, and the training that they were given, and those kinds of things.
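The employee analogy above can be sketched in code: a minimal, hypothetical policy layer that checks every action an agent proposes against the rules it is governed by, the way HR policy constrains an employee. All names and rules here are illustrative, not taken from any real agent framework.

```python
# Hypothetical sketch: an employee-style policy applied as hard guardrails
# around an AI agent's proposed actions. Names and limits are illustrative.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "send_email", "spend", "sign_contract"
    amount: float = 0.0

# Governance rules, analogous to the policies an employee is trained on.
POLICY = {
    "allowed_kinds": {"send_email", "spend"},  # no contract signing
    "spend_limit": 500.0,                      # per-action spending cap
}

def is_permitted(action: Action) -> bool:
    """Check a proposed agent action against the governing policy."""
    if action.kind not in POLICY["allowed_kinds"]:
        return False
    if action.kind == "spend" and action.amount > POLICY["spend_limit"]:
        return False
    return True

# The model proposes; the policy layer disposes. Unlike an employee,
# this check never "forgets" its training.
assert is_permitted(Action("send_email"))
assert not is_permitted(Action("sign_contract"))
assert not is_permitted(Action("spend", amount=10_000))
```

In practice the same rules could also be injected into the model as context, as John describes, but an external check like this one holds even when the model ignores its instructions.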

So I think one of the most dangerous things about LLMs [00:07:00] and generative AI in general is that we’re giving powerful tools to people who have not been trained to be structured thinkers. We, as developers and technologists, people who have been involved in analytics, and data scientists, we’re all trained to be pretty structured in our thinking.

We all think about the downsides, the upsides, where this could get out of the box, where it could go sideways. But generally people are not. Business analysts are a little more structured in their thinking, but people who are clerks or administrators or salespeople or marketers are really not trained to be structured thinkers.

So I think one of the most dangerous things is that people with, and it sounds bad, sloppy thinking are actually using natural language as a programming tool to stimulate these models and get these models to do things for [00:08:00] them. And I think that’s a problem. People need to be trained on what prompting is, and trained on the controls and governance that should be in your prompts, to get these models to do what you want them to do and not have them do unfortunate things that you don’t want them to do.

So I think that’s one of the biggest widespread risks that we face right now.

Luke: Just learning about some of the second-order issues, right? You might do this and it might sound great, but you might end up accidentally false-positiving a bunch of people out of home loans or something, right?

John: Yeah, yeah, exactly. And you’ve touched on something very important there. Generative AI is a really cool tool. It’s a great technology to use, and we’ve used it for many, many wonderful things. But it’s not good for some applications. Like you said, evaluating home loans and things like that. You shouldn’t be green-lighting or red-lighting, stopping [00:09:00] or starting, approving or denying loans with this kind of technology.

You shouldn’t be making decisions on healthcare and surgical decisions. Should I remove this person’s spleen, or should I remove the left lobe of their lung, or the bottom lobe of their lung? These are not things you should be doing with large language models, and maybe not even with data science. There are other technologies that can give you better input.

But these decisions, the ones I just listed in a very lighthearted way, should not be in the purview of generative AI.

Luke: Yeah, no, that makes sense. And I think there is a bit of training involved for professionals, right? Like, how can that doctor use this power tool in a new way, and be mindful of how it works? I think that makes a lot of sense. Just so we can shape it up a little for people, too: you mentioned EY earlier. Can you give us a little context on what EY is and what you guys do there, for folks who might not be familiar?

John: Yeah. EY is [00:10:00] one of what everybody refers to as the big four. It’s one of the big four consulting firms. I’ve spent the majority of my professional career in software and technology firms; I’ve been at EY for two years now, and it’s the first time I’ve ever worked in a professional services firm. EY has service lines related to auditing, tax advice, financial services, mergers and acquisitions, things like that. It’s a 420,000-person firm.

Luke: Oh, wow. That’s huge. That’s massive. Let’s say that you’re a Fortune 500 company, right, and the executive team’s coming in with a mandate: we need to be doing things with AI. Are they hiring you to help them navigate that space, or is it a service to support that type of effort? How is EY hired by these firms?

John: It’s everywhere, Luke. I mean, EY is the auditor of record for many thousands of firms around the world. We audit firms like Google [00:11:00] and Amazon.

Our consulting arm is hired to do all sorts of tech transformations and assessments, and AI readiness studies and things like that. Our financial services arm helps all sorts of companies understand financial services, and M&A helps them decide who to buy or divest, those kinds of things.

So it’s a multinational global conglomerate that is in every country in the world.

Luke: No, that’s awesome. These are super important services provided by firms like EY, right? Average folks don’t necessarily know what that means. And it’s interesting: they play a pretty key role in moving these technologies forward, right? Making sure that everything’s compliant, or that things are squaring up how they need to be.

John: It’s interesting you said, you know, do firms hire EY to do these kinds of things, because that’s why EY hired me: to [00:12:00] help the organization understand what to do with AI.

It was just coincidental that the gen AI wave was crashing over the world when I walked in the door. I’d been here just a couple of months when I was called up by the top executives in the company to come to a meeting and explain what we were gonna do with generative AI, which was quite intriguing.

You know, here I am. I walked in the door, I got a new job, and now I’m trying to find my way around, see how to use my PC and navigate the VPN and all the simple things that you have to do on a daily basis. Then you get a call that says, hey, you’re gonna go present your AI vision to the top eight people in the company next week.

I’m like, oh, nice. That’s interesting.

Luke: Time and place, you know. That’s awesome. Yeah. Well, I’d imagine, too, that given the scope and the range of services, you all probably have visibility into the whole gamut of everything going on, right?

John: We do. Yeah. We’re [00:13:00] one of the most highly regulated and most trusted companies in the world. So I spend a lot of time talking to our partner firms, technology firms like Microsoft, Amazon, Google, Adobe, and a couple of others that I’m not thinking of right now. And one of the wonderful things about working at a company like this is that we get access to their roadmaps. We talk to them about what they’re doing and what they’re not doing, and what they think they should do.

And we actually help drive their roadmaps. We spend a lot of time with the senior technologists and the fellows at Microsoft, and they explain to us what they think they’re gonna do, and we explain how we think that would work in our customer base and what our customers are asking us for. In some cases we do joint development.

In many cases, they listen to what we have to say and take it into consideration. I’m not gonna say that we move Microsoft’s or Google’s roadmaps; we have those conversations, and what they do is what they’re gonna do. But I do think we have some significant influence there, and we work with [00:14:00] some of the largest companies in the world, which come and ask us: is it possible to do these kinds of things?

Sometimes, yes, it’s out of the box from the core technology providers. Other times, no, we have to build some of this stuff on our own, and then we bring it back and say, hey, is this something we want to have as intellectual property that we’re gonna protect? Or is this something we’re gonna collaborate on with one of our vendors and say, we think this should be in your core product? Here’s what we did and how we did it. It probably wouldn’t fit exactly like that in your tech stack, but here’s an idea that you might wanna run with.

Luke: That’s awesome. Yeah, and I really appreciate you shedding some light on this, too, ’cause I feel like people think of each of these companies doing this tech in its own wild-west silo, and that it’s almost monolithic in nature across these big companies.

But we’ve spoken with a lot of folks in the academic research community, and folks like yourself, and there are a lot of different perspectives and points of [00:15:00] view, a lot of supporting collaborative effort that happens around these technologies. It’s not necessarily some rogue element in a lab somewhere doing a bunch of crazy things, right?

John: We have that too. We have that too. I have a blue-sky research team that I give ideas to and say, go see if you can make this work. And they go do it. Sometimes it works, sometimes it just flops miserably. But yeah, we do that too.

Luke: That’s cool. Out of all of that spread of companies and everything like that, is there any successful AI implementation whose approach or offering really stood out that you can share?

John: We did one that’s been in production now for about 18 months. If the audience understands the James Bond franchise, they might be familiar with the character Q, the quartermaster who has all the gadgets, you know, the shoe phone and the gun and all this different kind of stuff.

Well, we built a system called [00:16:00] EYQ, and the name Q comes from the James Bond character. The idea was that it’s a generative platform that, of the 420,000 people at EY, 300,000 people could use. The only reason the other 120,000 can’t is that they’re in China and Russia and Iran and Iraq and North Korea, and all the places that have export controls applied to them.

But we built that system, and we measured it at the 11-month mark to see what the adoption was, and we had 299,000 people on the system. So we virtually had one hundred percent adoption. Around the world, these people are on the system every day, piling in all kinds of information, comparing legal documents, generating new proposals for clients, doing all sorts of different things.

So that’s probably one of the world’s largest gen AI implementations, and we built it.

Luke: That’s wild. That’s an amazing take rate. What do you think was so key in getting that large an adoption: the ingredients, the [00:17:00] approach, or the product?

John: Well, the demand was there. People want to use gen AI. They want to learn about it, they want to know about it. But we as a team knew that inside a firm like EY there are bright lines. There’s a box that you can be within where everybody will be happy, and those lines are drawn by the governing functions: InfoSec, risk management, data privacy, and the legal team.

So I personally spent a lot of time with those four governing functions, saying, okay, we’re gonna build this global system. What are the things that would make you come back and stop us from sending out something into the world that everybody in the company could use? Everybody that has access, anyway. And they laid it out. They said, hey, you can’t have it retain any information.

Prompt goes in, response comes back, everything gets wiped out of the system. You cannot [00:18:00] have any data retention whatsoever. It has to be secure, it has to be bulletproof, it has to have all these operational disclaimers in front of it. They gave us a whole laundry list of things that we had to build into the system for them to be okay with it being made available to that many people in the world.

And we did. We took that laundry list and we said, okay, that’s fine. Here are all the constraints and the limitations we have to live within. None of them were onerous; they were all reasonable. So we built the system and said, there it is. We gave it to all the governing functions.

We said, test it, see what you think. And they came back and said, yeah, it’s great. Send it out.
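The "prompt goes in, response comes back, everything gets wiped" constraint is essentially a stateless gateway in front of the model. A minimal sketch of that design, where `call_model` is a stand-in for whatever hosted model the real platform uses:

```python
# Hypothetical sketch of the "no data retention" constraint: a gateway
# that forwards a prompt, returns the response, and records nothing.
# `call_model` is a placeholder, not a real API.

def call_model(prompt: str) -> str:
    """Placeholder for the real hosted-model call."""
    return f"response to: {prompt}"

def stateless_gateway(prompt: str) -> str:
    """Prompt goes in, response comes back; nothing is stored.

    Deliberately no logging, no history list, no cache: the only state
    that survives this call is the returned string.
    """
    return call_model(prompt)

print(stateless_gateway("compare these two contract clauses"))
```

The point of the design is what the function does not do: there is no place where a prompt or response could persist, which is what made the governing functions comfortable signing off.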

Luke: Awesome. Thoughtful approach. That’s fantastic. Well, on the other side: I know you’re an adjunct professor at the University of Michigan, right? Do you have any advice for students who wanna stay ahead of the tech curve, or who might be just venturing into this space?

John: You know, the world is really kind of fluid at this point, which is good and bad. Some people really love the dynamism and the [00:19:00] undefined nature of it, and other people are a little wigged out by that. They’d like it to be a little more controlled and a little slower. I was just saying this last week to my students.

I have a mix of graduate students and undergraduates. Towards the end of the class, we started talking about: what does the world really want from us? That’s what the students were asking, you know, how can I make myself the most employable person I can be? And I said, look, it’s a mix.

You don’t want to be the greatest algorithmic, mathematical thinker in the world, and you don’t wanna be the most loosey-goosey, free-flowing poet, either. You wanna be a blend of those things. You wanna be smart and literate. You wanna be empathetic and communicative, and you want to be able to engage with people in a way that makes you a trusted provider of services that helps them achieve their objectives and goals. And I’ve been talking about this: I’m on the [00:20:00] board at the University of Michigan, I teach at Michigan, and I’m on the boards at Penn State and the University of Texas at Austin. They all ask me similar things: how do we get students ready?

How do we make them as marketable and as attractive as possible? And I say, look, I think we’ve always been skewed. We’ve either taught people to be great mathematicians or we’ve taught them to be great poets. We don’t really teach them to be good mathematicians and good communicators.

I know academics would say, of course we do. Yes, we do. We do all that. But the point is that I’m teaching off my second book, Building Analytics Teams. That’s what the class is about. And I tell the students, look, I’m going to give you a 16-week real-world view into what the world looks like.

This is really what the world is, and this is what the world wants from you. So I think that’s what we need to do in education: allow [00:21:00] students to be the best possible people they can be.

Luke: You mentioned your second book, and I understand you’ve got a fifth book out now. What’s that fifth book about?

John: Yeah, it’s coming out March 10th, which is a month from now, so it’s exciting. The fifth book is called The Path to AGI. I take people on a journey through the book, starting with AI-ready data, and then I have three sections where I look at foundational AI, predictive AI, and then generative AI and causal AI.

In the last section of the book, I talk about those being merged together into composite AI over the next 20 to 30 years, and then about the hundred years after that, which is when we’re gonna achieve artificial general intelligence. It’s a pragmatic, practical view of where we are today, where we have been over the last 70 years, and what it’s gonna look like going forward.

Because you hear all these people, Elon Musk and Sam [00:22:00] Altman and others, Ray Kurzweil, say AGI is here, we have AGI today. And that’s just not true. It’s actually frightening a lot of people. So I listened to that, and listened to it and listened to it, and I said, you know what? I’m gonna write a book that is the counterpoint to that narrative. It doesn’t get down into hyperparameter tuning; it’s not that technical. But it’s not Herman Melville, either. I’m very excited about the book myself. I’m the author, I guess I should be excited about it, but I’m really excited about this book.

I think it’s for the people who are your audience, Luke. It’s for the people sitting there thinking, geez, I’ve really never heard of causal AI. I don’t know how it relates to traditional foundational AI or generative AI. How are these things gonna move forward? What’s gonna happen to this technology, and how can I be ready to use it in the best possible way?

That’s what the book’s for.

Luke: I love that, alongside what you were talking about with your second book and the course, right? It’s something that’s missing [00:23:00] from a lot of education, and practically these are things that people tend to have to learn through actually doing them, right?

There’s a void in: what are the real pieces I need to know, and the real things I need to be concerned about? Just ’cause there’s so much hyperbolic junk out there.

John: There’s a theme in these five books that I’ve written. I’ve been doing this for nearly 40 years, and I have made every mistake you could possibly make.

And I’m like, all right, maybe I can help people not make that many mistakes. Maybe I can cut off some of the U-turns or the diversions along the way, so they can have a straighter path through analytics and AI and data, that kind of journey. So I guess that’s my lot in life: to write books that help people have a straighter path to success.

Luke: That’s awesome. And you’re in a really interesting position, too, because you’re seeing young talent coming up through the education system, and you’re also [00:24:00] seeing the brightest talent out there working at the companies in this space.

It must be a really interesting mix of views as this space rockets forward. Are you optimistic about where things are going, based on what you’re seeing?

John: Yeah, I am, as a matter of fact. It’s a great observation, Luke. I hadn’t really thought of that. I’m in the classroom, talking to students who are juniors and seniors, undergraduates and graduate students. Then I see, like you said, some of the best and brightest in the world building this technology in my day job. And I’m also in front of the boards of some of the largest companies in the world.

So I see how non-technical executives are reacting to this technology, and I am incredibly bullish. I am incredibly positive on what’s gonna happen with AI. The one thing I do want people to understand, and not many people in this audience probably have this perspective, but I’ve had a number of non-technical people ask me: when are we gonna be done with [00:25:00] AI?

I’ve had people ask me, when are we gonna be done with data science? And I’m kind of like, that’s like asking, when are we gonna be done with math? It’s never done. This is the world. This is the future. This is where we’re going. There is no post-AI future; that doesn’t exist.

Whatever we have today is as bad as it’s ever gonna be, and tomorrow it’s gonna be a little better, and the day after that a little better, and it’s all going to be AI-enabled. That’s just the way it’s gonna be. So yeah, if you don’t like AI and you don’t like math, it’s gonna be tough for you.

Luke: Totally. No, I think that makes a lot of sense. Obviously there’s a lot of hype around these things, but it’s also very iterative. Like you say, everyone’s not gonna be replaced all of a sudden; the roles are gonna change, right? The tools are gonna change. That makes a lot of sense.

John: It’s gonna be an augmented future. You’ll [00:26:00] hear what is becoming an old chestnut now: AI may not take your job, but an employee who’s good with AI might. None of us are rejecting calculators.

None of us are rejecting voice recordings. None of us are rejecting computers. And the idea that you’re gonna reject AI is in the same vein.

Luke: It’s futile.

John: Yeah.

Luke: And we spoke a little bit about university-level education, and there’s the corporate side, and, if I understand, you’re also involved with the Mark Cuban Foundation’s AI bootcamps.

It seems like there are some of these bootcamp-type incubators rolling out there. How important a role are they playing in this space, from your vantage point?

John: I think the role is incredibly important, and I think the impact is growing. When I got involved four years ago, I think there were like eight bootcamps around the world. This last time, I think there were like 36. So there are more and more of them, and each of those bootcamps has somewhere between 20 and [00:27:00] 50 kids in it. Every time we do it, many of the kids are switched on, but you always see one or two who, you know, really get it.

Before they started the bootcamp, they just couldn’t see themselves in the AI future. They just didn’t have an idea of how that could happen. The Mark Cuban Foundation does a really great job of not just bringing kids in and going, here’s AI, you’re gonna learn an algorithm. There are more exercises about, hey, how do you wanna prioritize your playlist on Spotify, or how do you want to organize your search for the college targets you might go to? They really wrap that experience in things that the kids care about. So they get engaged, ’cause they care about music or their education or whatever it is, or a future role in industry.

And then they find out that, hey, they have an affinity for AI.

Luke: That’s awesome. That’s awesome. You get ’em exposed to it, [00:28:00] and that’s great. You mentioned earlier, too, that at EY you all are working on a lot of accounting, and I imagine regulation; you’re very mindful on that front. What’s your current take on the state of regulation in the AI space, and how that’s been evolving?

John: It’s a great question, and I think it’s a very important one to talk about. Probably eight years ago I would’ve been like, no regulation. You know, we’re all here to do good and use data for good purposes. But the AI and analytics industry has become global, and there are so many people involved in it.

You know, when I got involved in it all those decades ago, it was really a cottage industry. There was very few of us that were actually doing it. And the people who learned it like me, learned it at the knee of someone who was a master at it. It was almost like a Jedi kind of thing. You know, you found someone who would mentor you and they taught you about it, and you took it on and you know, and you went on and did things.

And I, I, I don’t wanna say everybody was like that, but not [00:29:00] many of us really ever thought of using analytics for nefarious purposes. You know, the whole idea of deep fakes and spoofing people and doing analytics to bend elections and things like that, just. It didn’t occur to us, you know? Mm-hmm. We were there to try to use data to move things in a, in a certain direction.

It was definitely capitalistic, but, you know, with an ethical and moral base to it. I really think that regulation is needed. A lot of people, six-plus years ago when GDPR came out, you know, there was all this hoopla that the Europeans are taking over, the United States has no more sovereignty.

You know, these ridiculous things. Now we’ve seen GDPR, the General Data Protection Regulation is what it stands for, and now we see the EU Data Act, we see the AI Act, and all these things are patterned off the same kind of regulations. And it really comes down to: you shouldn’t use data, AI, and analytics to hurt people.

There’s really nothing you can argue about with that. You know, the idea is that you use it for good purposes. You don’t use personal information, you don’t misuse it, you don’t disadvantage people. From that perspective, I see no reason why we shouldn’t have regulation at the federal, state, and local level governing how companies use data and AI.

I can’t see why we wouldn’t do it.

Luke: Yeah, and I think a lot of the issues around GDPR were that it was kind of late to the game, in a way, where, oh yeah, by the time it was out there, there already were these big privacy problems that had scaled through the adoption of, well, what’s gonna fund the internet, right?

Like, it’s through, you know, advertising, analytics, all of that. And so it makes a lot of sense, I think, that they actually approach it at this earlier stage, before there’s this huge market fit for AI, because then it can at least be less of a [00:31:00] surprise, I guess, for some things that are fully adopted.

John: You know, I’ve been doing this for quite some time, and in gen AI, traditional AI, causal AI, you really don’t need a lot of data to do good work. You really don’t need the data that would get you in trouble in any way, you know? Mm-hmm. When I had a large data science team in my previous job, when people would say, oh, we want you to work on this problem or that problem, we would ask for data, and sometimes they would append PII or PHI data to it. And we would just stop ’em and say, we don’t want it.

You know, don’t even give us that data. We don’t even want to have to cut it off; just do not give it to us in the extract. And they’re like, well, why? It’s all part of the record. And we’re like, we don’t need it. You know, we can do all the modeling and all the predicting and all the different things that we need from, you know, this data.

We don’t need to identify people, we don’t need any of that. We don’t need to know their names or their birth dates or where they live or any of that kind of stuff. We just need information about the phenomena we’re interested in.
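The practice John describes, refusing PII/PHI fields in an extract before it ever reaches the modeling team, can be sketched as a simple guard step in a data pipeline. This is an illustrative sketch only, not anything from EY; the field names and the `strip_pii` helper are hypothetical.

```python
# Hypothetical sketch: remove common PII/PHI fields from a data extract
# before modeling. The field names below are examples, not a real schema.
PII_FIELDS = {"name", "birth_date", "home_address", "ssn", "medical_record_id"}

def strip_pii(records):
    """Return copies of the records with PII/PHI keys removed."""
    return [{k: v for k, v in row.items() if k not in PII_FIELDS}
            for row in records]

# A toy extract: the model only needs the non-identifying fields.
extract = [
    {"name": "Jane Doe", "birth_date": "1990-01-01",
     "region": "midwest", "purchase_count": 7},
]

print(strip_pii(extract))  # identifying fields are gone
```

The point is the same one John makes in conversation: the phenomena of interest (region, purchase behavior, and so on) survive the filter, while names, birth dates, and addresses never enter the modeling pipeline at all.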

Luke: It’s a [00:32:00] really great point too, and I think that’s one that people aren’t always aware of, right?

Because there tends to be, I mean, I don’t know, I was in the ad tech space before I came to Brave, and there was such a hunger for all the different, you know, where can we get all the data, all the data points, all this and that. And it’s really like, you don’t really need all of that.

John: Yeah,

Luke: Yeah. No, it is great to hear that echoed there too. And we covered a lot of stuff, but is there anything we didn’t really drill down into that you might want our audience to know about?

John: I think one of the things, you know, we all get caught up in the math and the algorithms and things like that.

You know, you find people that are bifurcated in their view. You know, they’re focused on the math and algorithms, and some people are focused on the data. The real magic happens when they come together. I’ve never understood how people don’t see them as two halves, two sides of the same coin.

You know, the analytics and the data are inextricably married together. Mm-hmm. So, you know, if you’re coming into this field and you’re a young person or even an older person, you know, [00:33:00] start looking at it holistically. Don’t just think, oh, how can I squeeze out more AUC or, you know, ROC or whatever I’m trying to measure for.

I don’t really care about the data, or I only care about the data and I don’t care about the analytic treatment. Have more of a holistic view. I think those are better outcomes, when we think of it as a more generalized whole.

Luke: Great point and good advice. Where can people find you if they want to follow your work or follow what you’re saying out there?

John: LinkedIn. I’m always on LinkedIn. That’s my platform. I don’t use any of the other socials at all. I’m always on LinkedIn. You know, if you’re in the analytics field, I’m happy to connect with you; if you deal with data and analytics, I’m all about it. You know, I’ve got just over 26,000 connections and about 28,000 followers, and I’m only focused on data and analytics.

You know, if I get connection requests from people that have no connection to the industry, I just decline. But if you’re in the industry and you want to understand and know what I’m thinking about, [00:34:00] I’m posting multiple times a day on LinkedIn. Of course you can see some of my books over my shoulder.

All my books are on Amazon, all four of ’em now, and in a month, all five of them will be there. So go out and take a look at my ever-growing library of writing and pick and choose which resonates for you.

Luke: Excellent, John. Well, yeah, I really appreciate the amount of time you gave us today, and I’m sure the audience does as well.

And love to have you back again sometime too. And yeah, hope folks pick up your books too. But yeah, thanks. Thanks again. Have a great day. Alright, bye-bye. Thanks.

Thanks for listening to the Brave Technologist Podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately.

Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

In this episode of The Brave Technologist Podcast, we discuss:

  • How AI technology has evolved from working with just 10% of the world’s structured information to accessing 90-95% through generative AI
  • EY’s implementation of EYQ, a generative AI platform with nearly 100% adoption among 300,000 eligible employees worldwide
  • Risks associated with generative AI and the importance of education in preparing the next generation for AI
  • The necessity of regulation in the industry, and the holistic view needed to understand the interplay between data and analytics
  • Preparing students and the next generation of our workforce for employment

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.