
Episode 33

AI and Neuroscience: Global Impacts & Redefining Humanity

Benjamin B. Bargetzi, CEO of Bargetzi & Company Group, discusses how cultural and regulatory differences influence AI innovation and adoption across China, Europe, India, and the US. He also explores AI’s role in transforming medicine; the ethical implications of AI in military applications; and the urgent need for global AI literacy to prevent economic disparities and job displacement.

Transcript

[00:00:00] Luke: From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host Luke Mulks, VP of Business Operations at Brave Software.

[00:00:21] Makers of the privacy-respecting Brave browser and search engine, now powering AI with the Brave Search API. You’re listening to a new episode of the Brave Technologist, and this one features Benjamin Bargetzi, who is a keynote speaker focused on AI and neuroscience. He’s carved a dynamic career path from leading roles at big tech giants, such as Google and Amazon, to becoming an international entrepreneur, advisor, and investor.

[00:00:46] Benjamin researched and studied the human brain at world-leading universities in Oxford, London, Singapore, and Zurich, where his research focused on how the human brain deals with change, risks, and uncertainty. Today we covered how countries [00:01:00] globally differ in their attitudes to risk and how this impacts technological innovation.

[00:01:04] The role ChatGPT has played in overcoming our concerns around AI and how we can continue to face our fears to embrace the inevitable. Along with boundaries he feels humans should be creating with technology as a response to our decreasing attention spans. Now for this week’s episode of the Brave Technologist.

[00:01:21] Benjamin, welcome to the Brave Technologist. How are you doing today? I’m doing great. Thank you so much for having me. Excellent. No, I’ve been looking forward to this interview. Just to kind of set the table, why don’t we give the audience a little background? Like, how did you kind of get into what you’re doing, and what do you find yourself doing these days?

[00:01:41] Benjamin: It’s actually a very good question. So basically, originally I’ve been a neuroscientist and psychologist, just always loving how people think, feel, and interact with the world. But then, as of 2016, I transitioned more and more also into the research of how you could create artificial brains. So my burning interest went to robotics and then also [00:02:00] artificial intelligence, because in the end AI is basically just, you know, especially in the early days of 1956, trying to create neural networks, like

rebuilding the human brain in the machine. And so I think this intersection of the brain and the machine has always really fascinated me, long before the current hype. And then honestly, I was set on becoming a professor, but then I got hired out of research into Amazon and Google, and spent my time more in the tech companies.

[00:02:27] And now I’m traveling the world, being a keynote speaker on those topics, helping companies transform with a consulting company, but also running an investment fund to help build the next Google, so to say. So just everything where people, business, and machines come together is something that makes my heart beat a bit faster.

[00:02:46] Luke: That’s awesome. No, I love it. It’s kind of like academic meets practical. And, you know, like you mentioned, you worked at Google and Amazon. What did you do at either of those? Was it around AI or related areas?

[00:02:56] Benjamin: So in Amazon, I was more like a program manager, helping tech teams [00:03:00] work together more efficiently.

[00:03:01] So we did apply machine learning, but you know, just classic Python stuff there, more like helping the ship keep sailing without having too many leaks. So I would say it was of course machine learning based; on the other hand, it was not deep tech research. And then in Google, I was more on the key account management side.

[00:03:19] So talking to partners of Google about how to make use of Google’s technology to drive their business and, you know, optimize their performance. I did spend a bit of time there, just out of my own passion, working with Google DeepMind and, you know, talking to people and having a really big passion for what they’re doing.

[00:03:35] But I would say my core role was really more on the intersection of business and tech, not in deep tech research, so to say. Even though one of the collaborators of Google DeepMind was Professor Karl Friston, who was one of my scientific mentors back when I was in academia. So there was this nice triangle of, you know, business, application, and nerdiness that I was involved in.

[00:03:55] But it was not my core function. It was just something I kept doing with a lot of passion [00:04:00] on the side, so to say.

[00:04:02] Luke: That’s awesome. I think it’s cool to kind of have that mix in your background, and at these companies you have access to a lot; they call them campuses, right? Like it is an interesting kind of hybrid model.

[00:04:12] What’s your take on kind of the two different AI approaches that you’ve seen between Google and Amazon? It sounds like one was more practical, like product side, like at Amazon, right? Like maybe we can go into a little bit of that.

[00:04:24] Benjamin: I think you have to, really, maybe quickly look at how the two companies differ.

[00:04:27] So I think Amazon is extremely scalable and fast. It’s crazy, right? Amazon is like: one morning you have the idea, okay, let’s take over this market, and then, you know, by the end of the week it’s kind of done, in that spirit, of course, right? And that’s why, of course, for Amazon, you know, you have these leadership principles that define the culture and how you operate.

[00:04:48] So something like frugality, you know, don’t do things that are too fancy, but stick to the basics; bias for action, be fast, you know, learn and innovate. So Amazon is really about maximizing speed and [00:05:00] customer focus, but not necessarily making profits, just volume.

[00:05:03] And then, you know, long-term planning to be more profitable. So Amazon very famously, for 20 years, didn’t make profits, but just kept increasing volume until it was time to, you know, turn it into a mega business. Google, on the other hand, has a really strong innovation and research culture.

[00:05:19] So Google is really, you know, you have the 20 percent projects, where one day of the week you just spend not on your core job, but thinking about cool stuff you could innovate at Google to make the company better. And Google has this really, really strong emphasis on personal development, on learning courses and all these kinds of things.

[00:05:35] Right. So from that perspective, I would say also, if you look at the market, Google has DeepMind, its own AI research institute; Amazon doesn’t have the same thing. So I would say Amazon is extremely focused on delivering pragmatic value, long-term bets for sure, but really,

you have to quickly fix a problem with it, you know, and create speed and momentum with it. Whereas [00:06:00] Google really wants to give people the time to, you know, leisurely let the brain roam and come up with new things. And I think, if you compare the AI approaches: for Amazon, it’s really something you implement to work more efficiently, fast, and cost-smart; and for Google, it’s almost an intrinsic thing.

[00:06:17] Of course, there’s a capitalistic reason behind it. Yeah. But at the end there is really more this academic, educational, almost, you know, philosophical angle to it. So I would say that is definitely something that’s deep in the culture difference of the two companies, and which you can really also feel in how they think about machine tools; and, you know, AI is one of them.

[00:06:40] Right. So I would say Google is a bit more playful; Amazon a bit more: this has to bring money or, you know, let’s not do it.

[00:06:48] Luke: Yeah, no, that’s great. I mean, it’s super helpful too, because I think, you know, a lot of listeners might not be aware of some of the nuance, right? They probably engage with both these companies all the time, but there is just a different kind of ethos and [00:07:00] culture and objectives, right?

[00:07:01] Like that both of these hit. But it’s super cool that you’ve got a background with both, directly, where you kind of see a mix of all worlds coming together there. You work all around the world, all across the globe, in different areas. How do you see cultural differences impacting kind of the development and adoption of AI globally?

[00:07:20] Benjamin: That’s a very good one. So I do move a lot between Europe and, well, if you want to count the UK as Europe, I think London is almost like a different world from Europe. So Europe, London, and then Singapore, China, and the US.

[00:07:31] I’m moving around all of them, so these are really the regions I know best. And just from that, I can definitely say, I think it’s kind of always the same: the US is kind of leading in it, you know, adoption rates are quite fast. People are much more open to failure. They’re like, okay, if this AI stuff could triple our revenue, let’s just risk it breaking our operations. Whereas in Europe, you have much more of a focus on: is this ethical?

[00:08:00] Is this data-private? Are we sure there’s no single risk involved in this, right? It’s really more like this. In Asia, it’s a bit, you know, I’m mostly in Japan, China, and Singapore. I think Singapore is extremely progressive in just adopting new technologies. I think that’s really one of the key strengths of the country.

[00:08:15] I mean, Switzerland, where I’m from originally, is, I would say, also decently strong in AI. That often gets forgotten when you list out all the AI powerhouses. I think Switzerland, with ETH Zurich being centered here and with Google Zurich, you know, Google’s main development hub in Europe is also in Zurich.

[00:08:32] So I think there’s a lot there; same for, I think, Meta and Microsoft, and Apple has a smaller office, but you know, you get the point. I think a lot of people really invest in Zurich in the end. So it also often gets forgotten how much energy and how many new patents come out of this region.

[00:08:45] But I would say, definitely, in terms of culture there’s such a big difference in the openness to the risks, because basically, to some degree, you’re just like, hey, let’s open up part of our data to an algorithm that will make [00:09:00] better stuff out of it than humans can. Right. In the US, the willingness to do that is, I would say, quite big, as long as they see returns on investment; in Europe, there’s a lot of discussion.

[00:09:10] We have the AI Act in Europe now, in the European Union, which, I mean, the US doesn’t have in that form. And then in China, of course, it’s a bit of a different cultural system, a political system where the government makes the decision, hey, we want to invest in AI, and everyone does it.

[00:09:25] Right. And so that’s kind of a very different momentum. You see India as well; I mean, very great startups coming out of India. I think a lot of investors don’t have that on their mind, especially in the AI field. And then I mentioned Japan just because I think the general openness of Japanese culture to technology is amazing.

[00:09:43] Like they have robot elderly homes. They have Hatsune Miku, you know, the hologram singer, and basically just this intersection of technology and normal life. I think that also gives different, you know, fertile ground to grow [00:10:00] AI quickly.

[00:10:00] So if you ask me like that, I think culture is definitely a big one. And also if you look at venture capital investments, I mean, most of it is, again, San Francisco, Silicon Valley, the US, right? And then a little bit is happening in London, more regulation technology; in Switzerland you have some deep tech research; but again, I think most of it is centered in the US, which explains why they’ll probably benefit the most from the current trend.

[00:10:25] So it’s sad, yeah. I also see a lot of thought leaders in the field complaining that the more Europe regulates data privacy and, you know, AI development, the more people are just going to move to the US, because there you don’t have it to that degree. Right. And whether you agree with the ethics of that or not is a totally different discussion.

[00:10:43] I think the economic reality is there: the culture is very much determining how much experimental freedom we allow people and how much acceptance there is for failure. And for whatever reason, I think people have watched too many Terminator movies; with AI, it seems that there’s so [00:11:00] much very harsh rejection of trying, in Europe especially, that, you know, everyone’s like, oh, someone wants to build, I don’t know,

a sales tool for AI? This may become the next Terminator, right? Of course, I’m exaggerating now, but I think there’s a very, very strong regulation focus now in Europe around these topics, especially also when it comes to: how is people’s data being used? How can people be targeted? And I think those are very fair questions.

[00:11:24] The problem just is, while Europe is debating philosophically, you know, like ancient Greece, while we are debating what’s the ethically right way to do it, the rest of the world is just rushing ahead. And sorry, by the rest of the world I mean the US and China; everyone else, I think, is kind of trying to juggle in between.

[00:11:41] Luke: Yeah, no, it’s great. Great insight. And I think, yeah, it’s one of those things where there is a ton of hyperbole out there, right? Because it really taps the imagination in a way that a lot of other technology doesn’t necessarily do. On that kind of note, given your neuroscience background, do you have any insights from the neuroscientific realm [00:12:00] that might help us improve our daily decision making and mental well-being with AI, or things that you’re concerned about?

Benjamin: Luke, now you just opened the Pandora’s box.

Luke: Let’s do it. Let’s do it.

[00:12:10] Benjamin: So when it comes to neuroscience and AI, I mean, one thing I like to talk about is the free energy principle, which is a neuroscientific theory, well, also from thermodynamics, that tries to optimize machine learning models. So that’s more the technical side.

[00:12:25] We can talk about that later. But if we talk about the user side, there’s a lot of intersection, right? So one thing, really, what I just mentioned is a very good kickoff for this: why is AI so different from other technologies? I mean, you have never seen people trying to pass, like, a CRM Act, or, you know, even for the metaverse or something; people don’t try as hard to limit metaverse glasses or whatever, Apple Vision Pros, as they try to limit AI.

[00:12:51] And I deeply believe that the reason for that is that people don’t like how close the machines come to being human, in the sense that [00:13:00] AI, really as a field, promises to replicate human intelligence and maybe even improve it, right? Apple Vision Pro, or glasses, or, you know, a CRM system or whatever,

[00:13:10] they don’t promise that. They promise to be entertainment or business features for us. But AI, even though it’s being sold as a business feature, in its philosophy, in the 1956 Dartmouth, you know, envisioning, is supposed to replicate the human brain in the machine. And I think that is actually something that goes very deep to the human heart.

[00:13:30] So you had these, you know, human crises. You first have Copernicus, right, stating that the earth is not the center of the universe, but just one stupid planet among many others. Right. And then you had Darwin being like, hey guys, by the way, you’re actually not really special beings. You’re just animals, right?

[00:13:48] You, who have your instincts, your behaviors, and all this tribal behavior. And then came Freud, who was like, hey guys, by the way, all this rationality, this brain that you’re so proud of: [00:14:00] you’re actually controlling the very least part of it. Most of it is just automatic, right? And he basically, I mean, to some degree, called us robots, right?

[00:14:07] He said, hey, most of you is a robot. He didn’t use those words, but still. And now the fourth kind of crisis for the human identity is coming, in the form of the fact that we have created something that may very well overcome us in terms of the natural hierarchy in the world. I mean, this is really the Terminator scenario and everything.

[00:14:25] I’m not a big fan of those. I do see the risks, but at the same time, I think it’s very often very black and white: you know, between using ChatGPT to give you cooking recipes and, you know, Terminator robots taking over the world, there’s a huge gap in between, and I think the real AI experts are the ones who try to unfold that huge gap in between.

[00:14:45] But I do believe this about AI: you know, in the 1990s, you had Garry Kasparov losing, right, to IBM’s AI in chess. And then we as humanity made this kind of contract: okay, the machines can do the data and the analytics [00:15:00] better, the mathematics, but we humans do creativity, art, emotion, and all these things.

[00:15:06] But well, you know, now you see, okay, Midjourney and DALL-E, they’re just drawing much more beautifully than I ever could. GPT, if you prompt it right, gives you amazing texts that are, you know, just pure literature. Right. And at the same time, you have something like empathy, where we say, oh, there’s definitely a very human quality, but there’s this MIT doctor study, right?

[00:15:26] That people find machines more empathetic than human doctors. Right. And I think that’s what we really feel now: the machines are getting better and better at stuff that we really, really consider very human. And I just believe, from this perspective, that’s the reason why some people have such a strong backlash.

[00:15:44] Some of them call AI a hype. Some say it’s, you know, completely overhyped. But I think a lot of people are actually scared, and that’s what I see a lot when I’m on stage and talk to the audience afterwards: the fear of being unemployed, you know, of being replaced by the machines. People don’t have this fear with Apple [00:16:00] Vision Pros or with a CRM, right?

[00:16:01] But they have these fears with these machines. And I think a lot of this fear comes from not understanding what they actually are. And to link to the second part of your question, I mean, definitely the machines are going to change our brains and how we interact with the world. I mean, if you look at attention spans, right?

[00:16:19] I mean, there is a study, it’s a bit debated, there’s a lot of semantics around it, but the fact is there that since the introduction of smartphones, human attention span keeps dropping. You know, it gets harder and harder for us to focus on one thing. And now generation TikTok and everything, right?

[00:16:34] We are getting more and more used to these quick dopamine rushes for our brain, and therefore our attention is getting more and more divided. That’s one very simple example of how the machines we use are not just tools; they’re actually also recreating how we interact with the world. And then you can think about what happens to children that grow up in the metaverse, you know, that are using the metaverse all the time.

[00:16:57] They are used to this very [00:17:00] beautiful, photorealistic environment. Will they be sadder in real life because it never matches, you know, the illusion they see? Potentially. I mean, also depression, right? Anxiety rates have spiked since the introduction of social media. So I do believe that, definitely, the more we use machines, there’s an impact on us.

[00:17:19] And I mean, I’m leaning a bit out of the window here, but from an evolutionary perspective, you can argue that the task of evolution is to adapt you very quickly to your environment, to make you succeed in that environment. And actually, then, if our brain starts deprioritizing attention span, because it learns that in this new world it’s not needed, and instead, you know, invests more in brain areas or skills that are needed.

[00:17:44] You’re actually doing the right thing. That’s also what I’m always saying on stage: you know, yeah, our attention spans are decreasing, but maybe the new age just doesn’t need that anymore. Right. And then, to give you one last answer to your question, and then I’ll stop talking:

[00:17:59] I mean, [00:18:00] I do entire executive coaching sessions with CEOs, one-hour or 12-week programs, on how to use AI tools to do stuff in 20 minutes that took them 20 hours before. And I mean, this kind of productivity speed, I do it in my own life. I automate every single thing I can and just increase my productivity.

[00:18:19] However, I have very strict rules, and you can try this: from Friday evening to Monday morning, my phone is off and gone, and so is my laptop and everything. So every single weekend I’m gone from the planet. My phone is off. I have a personal assistant who does my social media posting for me in that timeframe, because I’m not going to touch my phone or laptop on the weekends.

[00:18:40] I go to the mountains. I do hiking. I detox from the dopamine kicks in my brain. And I think that’s really because I’m born from both of these worlds. On one hand, the machine world, where I really love the power and speed it brings, and I use that in my business life or for my clients. But at the same time, I really need this [00:19:00] neuroscientific,

[00:19:01] psychological piece that comes with being offline, in nature, in a cold bath, whatever it may be. And I do advocate for that strongly: that people don’t just use technology only, but then of course also aren’t stupid and don’t deny the enormous benefits that you can achieve in life if you take these tools seriously.

[00:19:20] Luke: No, that’s great. I mean, I want to touch on a couple of those things, because one, back to the first points that you brought up, right, there’s the fear that people have, especially when you first kind of engage with them, now that ChatGPT has been kind of accessible for almost two years now, right?

[00:19:36] Given how you have all these different touch points across the corporate world, but also with people that are using this stuff, have you noticed over time any shift in people’s fears around this, just as they’ve got more access and exposure to it?

[00:19:51] Benjamin: To some degree, GPT is a curse for the AI field, because I think a lot of people just confuse the limits, like the potential of AI, with GPT. They’re like, oh yeah, it’s just going to [00:20:00] be a very good text, right?

[00:20:01] I’m like, nah, nah, this stuff could actually solve cancer. Humans haven’t managed it; in two and a half thousand years of medicine research, we haven’t managed to solve cancer. Maybe if you put AI into research, which is one of the biggest things I’m investing in (medical AI is like my venture capital focus, and neuroscience tech), we could solve this.

[00:20:19] We could solve cancer, and then someone has to argue very hard to me how AI can be a bad thing if it can solve cancer, right? On the other hand, however, I think GPT for a lot of people was a really great first access point for interacting with these tools. And I think the reactions to it are awe: people feel like, wow, this is amazing.

[00:20:39] This is like crazy, right? What else could it do? Another one is cynicism. So you’re like, you know, okay, look at this, it sometimes makes mistakes (not like humans never make mistakes, but okay), and therefore it’s all overhyped. Right. And I think, if you ask me like that, the fear is definitely changing by people using the [00:21:00] tech.

That’s also in the neuroscience of fear; it’s one of my favorite findings: humans beat fear by facing it. And that’s, that’s like in our biology. If you’re scared of spiders and you see a spider in the room, your brain lights up, your amygdala almost explodes, right? But if you in therapy put a spider on your shoulder, over and over again in small doses, you actually decrease the neurological reaction to it.

[00:21:25] And at some point you’re cured of the phobia, which is insane. So humans beat fear by facing the uncertain, shadowy thing and making it a concrete reality. And I think GPT in that regard is very helpful, because you take this shadowy, dangerous menace of AI and you actually expose yourself to it, which makes it much more, you know,

[00:21:45] overseeable. You give it a name, you give it a face, so to say, and you make it controllable. And then the fear goes away, because fear psychologically is just a loss of control, right? But at the end, if you have a name for it, if you can control it to [00:22:00] some degree, you feel like you’re in control. I think that is good for the fear that we were talking about.

[00:22:05] However, the problem is that mostly those people and sectors which will be heavily affected by these automations are usually not actually using ChatGPT. So I think very often, in our tech bubble, we think everyone is using it by default, but that’s actually not the case. I’m still meeting people on a daily basis where they’ve booked me for an executive coaching on, you know, AI transformation and productivity.

[00:22:29] And I have to explain to them what ChatGPT is. Like, it’s insane, it’s actually crazy. And I think that’s really the bubble that you and I are in, you know; there’s like people who love technology and are always on the new gadgets, and it’s a very small portion of the world that actually shares this passion.

[00:22:44] And so to answer your question: yes, if you use it, it takes away the fear. However, some people get cynical, and they’re like, oh, the machine makes mistakes, therefore all of AI is a hoax. I mean, that’s the negative side that you see.

[00:22:57] Luke: Even seeing how, like, you know, people were really kind of [00:23:00] wow-factored by the types of outputs you get.

[00:23:02] Right. But then I’m even seeing it now where people are like, no, this feels like something, or sounds like something, that a prompt would output, right? Where it’s actually kind of relatable in that way, where it’s almost like, well, then what are you concerned about? Right.

[00:23:14] Like, you know, it’s interesting. I mean, to get back to another point too, cause I think it’s super interesting, and especially since you’re so into it, I’d love to get some insight from you on it, around kind of how this can help the medical field, cause we hear that thrown around a lot.

[00:23:28] What are some of the ways that you see AI helping? You used cancer as an example, but just in general too: how can AI help to improve how we learn about and treat major diseases and things like this, from your perspective?

[00:23:41] Benjamin: I think it’s just that, you know, as a neuroscientist, you also study genetics and, you know, molecular biology and all this, and at some point it just hurts your brain, because you realize how incredibly magnificent the human body is, you know, how insanely complex, and how individual (that’s the hard thing for medicine), how individual [00:24:00] everybody is.

[00:24:01] And the big problem here is just that the combinatorial dimensions are insane. Like, you take a drug into your body, right? And then there’s the way in which it interacts with your nutrition levels, with your specific biology, with maybe prior physical trauma or even mental trauma that you had, how it reacts in the brain with different regions,

[00:24:20] if a certain one is active versus another one. These interactive effects, it’s just impossible for a human to figure out what’s happening here. I mean, in Singapore, I once did a genetics study on risk taking, and then I was running my normal regression analysis on my computer, and it took me four days to complete, right?

[00:24:39] Just because the number of factors we had to check for was just incredible; it’s just unfathomable for humans, right? And I think bringing AI into research, especially when it comes to medicine development or diagnosis, you know, I think that is the true power. Even surgery: you could argue, nowadays one in 10 [00:25:00] lifesaving surgeries fails.

[00:25:02] So that means, in the global average (I mean, it differs by country), basically if you’re really hurt and you need a surgery or you’re going to die, and you know, you cannot always go back to your home country, sometimes you’re traveling somewhere and then you need the surgery, now you have a 10 percent chance on average that you’re not going to make it out.

[00:25:18] I mean, okay, 90 percent of the time you’re going to make it. That’s great. But what if AI can bring this down to 0.01 percent? Then at some point you have to ask people: would you let yourself, you know, undergo surgery by a machine? And they say no, because they want a human that they can trust. Right. And I think this psychologically comes from accountability.

[00:25:39] We want the human to be accountable if something goes wrong, right? That’s just 300,000 years of Homo sapiens evolution that made our brain tick that way. But then, if you decide about the life of your child and you have to decide between your human bias and a machine that statistically does a hundred times better job, I mean, at some point we’re going to have to trust the machines to do a good job.

[00:26:00] Right. And that’s why I think in the medical field, all the way from using data to understand a person’s condition: I mean, there’s also this famous case, right, where there was a child with a very rare disease, and it went to 10 specialist doctors and they all misdiagnosed the child, but then ChatGPT figured it out in, like, five seconds. You know, this kind of diagnosis potential is huge.

[00:26:22] There’s also breast cancer detection now with AI that just does a much better job than a human could ever do, right, with the methods that they employ and all such things. So I think diagnosing things early on, personalized nutrition, personalized medicine specific to your body, drug development; and, you know, I’m a very strong animal ethics advocate, so hopefully

[00:26:45] Removing the need for animal testing with AI because animal testing is always a problem. You’re testing it on a very different life form than what you as a human are, right? And then you don’t get the full match. Like if it works for a mouse, like, yeah, great. Let’s see if it works for a human. [00:27:00] But if you can actually run simulations in AI on human data, You would actually even develop medicine that’s really made for humans without having to do human testings.

[00:27:08] Right, which also is unethical. And so I think in this field, all the way from developing to diagnosing, maybe even surgery, and, as I said, maybe even the AI doctor that treats you better than the real doctor, I see a huge revolution incoming, especially since the medical field is still the one field in the world

[00:27:27] with the most untapped data potential. They have the most data in the world. They have the most amazing stuff lying around, but their IT systems are horrible. They’re completely outdated, and they could benefit so much from AI. And if you think about how many human lives we can save by implementing this, I mean, you just have to get excited about AI, to be honest.

[00:27:48] Luke: Yeah, yeah, totally, and I totally agree. I think the ability for it to process large amounts of data, like all the scans, all of the stuff that people just get on a regular basis, right? And being able to train off [00:28:00] of it, it’s amazing. And we’ve talked a lot about benefits, a little bit on ethics too. What things are you kind of concerned about around AI?

[00:28:07] I know we talked hyperbolic and Terminator, all that stuff, but looking at the world, right? Looking at how technology scales, right? Like exponentially, and how ubiquitous people aim to make AI, what stuff are you a little concerned about when you look at that?

[00:28:25] Benjamin: The biggest fear I have is that we will not use the enormous potential of AI, the potential it could have for the world, because we’re too scared about alternative scenarios.

[00:28:35] So my biggest fear is that we’ll miss this gift of the gods, you know, the Prometheus flame; that we’re going to miss out on it because we’re too scared about what the fire could burn, instead of using it for cooking and giving warmth to houses. So that’s really how I see it, this Prometheus flame that we are getting: we have to use it well.

[00:28:51] So that’s my biggest fear, that we’re not going to use it for cancer research and all these things. As you can see, due to personal family history, the cancer topic is [00:29:00] very dear to my heart. But then also, talking more pragmatically, I think some things are definitely going to happen: it’s definitely going to widen the gap between poor and rich, for sure.

[00:29:10] I mean, obviously, people who are now in developing countries have much less access to these productivity boosters than people in developed countries, so their economic output is going to get weaker and weaker in comparison. It’s going to move the power again into very few hands, like very big companies that are already big.

[00:29:28] They’re just going to profit more and more from these technologies. I think also many people will lose their jobs, but a lot of them, you know, will be able to just transition to new jobs if they have a more flexible and generalist education. But the more specialized you are, and especially the more you are already today in a

[00:29:46] societal layer that is struggling maybe with the economic situation, the worse you get hit. So that’s the big thing. This unemployment wave is not hitting the people with office jobs; those people, you know, they’re going to lose their jobs, but they will [00:30:00] very quickly find a way, how to, you know, do an AI-supported job.

[00:30:04] But then the people who are already now very specifically trained and educated, and maybe financially not so strong and not so good at adapting to change, for those people it will be very rough. And as well, it’s going to widen the gap between rich and poor. That’s going to happen for sure. You know, me literally, I now do stuff

[00:30:22] that took 50 hours a week before, and I do it in like two hours. You know, imagine how much more time I get to create economic value for myself, right? Someone in a different country maybe, or even in my own country in a different educational situation, may not have the chance to actually do this, and that means I just get a 50 times more benefit out of all of this than they do.

[00:30:45] And that is a really big issue, which is, by the way, also why I founded the ACID Global Future Institute, a nonprofit institute where we educate the whole world about how to use AI tools and get up to speed with modern technologies, exactly because [00:31:00] we want to avoid that people get left behind just because of their backgrounds.

[00:31:04] So it’s all free: tutorials, how to get used to the tools. And that’s really our attempt to help ensure the world doesn’t get too divided between the tech-savvy and the not tech-savvy, but that divide is for sure going to happen. So yeah, if you ask me about risks, I think economic implications and the widening gap, that’s going to happen for sure.

[00:31:25] Unemployment is not going to be the problem for most people, but it will be for those who already struggle in life. So that is definitely a really big concern. And then, as I mentioned, I think AI should stay out of the military. Like, I honestly think, if there’s even a 0.1 percent chance that this goes horribly wrong, why would we do it?

[00:31:44] You know what I mean? We already have nukes; we already have such destructive forces to keep each other in check. Why would we create something even more dangerous? And I think, back to neuroscience and psychology, there’s something in humans that loves to [00:32:00] experiment and try out stuff, even if it’s bad.

[00:32:03] So even if it’s obviously a bad idea, you still want to do it for the sake of curiosity and innovation, because there’s something in humans that inherently loves to play and try out stuff, which is all fine until we all, you know, get nuked. And I think that human curiosity and playfulness,

[00:32:24] accelerated by, you know, modern technology systems in the military sector, could end in very, very bad outcomes. And it’s not even about Terminator. I mean, if you look at wars, they say the American War of Independence was the last romantic war. You know, you had the samurais, the Romans, the Spartans. I mean, it was also violent, but at least there was some form of epicness to it, you know, like in video games and stuff.

[00:32:47] But then came World War One, and I mean, it was such a brutal, violent war due to the modern technology that was available. World War Two, the same, right? Suddenly you had planes and bombs and nukes; it got out of control. So now [00:33:00] with AI, we’re just adding another layer that just invites disaster and escalation. And maybe one very last point on that.

[00:33:06] So I really believe that if you look at human history, we had the first Industrial Revolution, which, you know, was more like steam engines and things like that. Then came the second one, with electricity, right? And then the third one, let’s say, was computers and the internet. And now you can debate whether AI is the fourth age of the world or just the peak of the third age, right?

[00:33:28] I mean, that one you can let the historians debate. But I think we have to take it seriously that what we’re doing right now is going to change the world as much as electricity, coal, and steam did. And we should really put in front of our eyes how much change we’re going to see very quickly now.

[00:33:45] It makes me very excited for what’s to come, but I also understand how people can feel left behind by this speed. And that’s going back to what you said before about GPT and the fear. My number one advice for everyone is to really [00:34:00] get to know the field, try it out, you know, understand what’s happening, because knowledge takes away fear, and humans beat fear by facing it.

[00:34:09] Luke: Yeah, it’s wild too. I mean, I’m glad you bring up the military point, not to get political at all, but more from a perspective of just seeing how things are playing out, which is really weird. You see how, in some of these conflicts, you’ve got almost elements that look like World War One, where you’ve got trenches and things like that.

[00:34:28] But then you also have people using consumer drones with, you know, some machine learning capabilities on the camera side, and all these weird kinds of convergences that, for a scrappy underdog, make sense, right? Where you’re like, yeah, oh, I can go buy this drone

[00:34:45] and use it for survival and all of that. But there’s also just this really weird gruesomeness to all of it too. It is interesting to hear about, because it isn’t necessarily nukes. It

[00:34:55] Benjamin: reminds me of, in Japan, you had the Sengoku, this period where, you know, the order collapsed and you had all these feudal lords fighting each other.

[00:35:04] And then you had the Battle of Sekigahara, I think, like, you know, the final battle of the war. And there it was, like, samurai with actual katanas and people with rifles imported from Europe fighting each other. So you had samurai running around with a sword and people with rifles. And I feel it’s kind of the same today.

[00:35:21] You have these trenches and these very, you know, traditional World War One things, but then you have drones and insane, crazy stuff, right? And I think this is really this transition period where the old world meets the new one. But very soon, robotic and machine-guided warfare is going to replace the old one, which again means, yes, more productivity, more effectiveness.

[00:35:44] But I think war is something where I don’t think we need more effectiveness and productivity.

[00:35:49] Luke: No, no, totally, totally. Really interesting

[00:35:51] Benjamin: research. That would be my claim here.

[00:35:54] Luke: Yeah. Yeah, absolutely. I mean, like use it for the positive for sure. I think it makes a lot of sense. I mean, you know, you’ve [00:36:00] been super gracious with your time, Benjamin.

[00:36:01] I really appreciate it. Was there anything here that we didn’t cover that you might want to let our audience know about?

[00:36:09] Benjamin: I think two things, maybe. One thing I really believe in is routines in the world of speed. So what I mean by that is, if you look at the human brain, dopamine and dopamine addictions are actually, I think, one of the biggest problems of the modern world.

[00:36:22] So dopamine is our neurotransmitter that helps us decide what we want, what we strive for, what we crave. And in evolution, that was our best friend, right? It told us to strive for sugar, because sugar gives us energy to walk longer distances. It told us to strive for information, because information and gossip help us know what’s happening in the world around us. But nowadays we get all these dopamine kicks from all the food around us that has way too much sugar, and from social media reels, which contain very condensed information and gossip, right, to update our brains. But that just means we’re constantly dopamine-overflooded. [00:37:00] And that explains also why depression rates are so high: these constant kicks that we’re getting from all these technologies, from video games, from reels, from all this nonsense, actually make us addicted to quick inputs, to quick stimulation.

[00:37:16] It’s also the reason why the dating landscape is horrible, because people need quick kicks and all these things, right? And fundamentally, what’s happening here is that we have all made our brains addicted to easy pleasure and an easy life: alcohol, cigarettes, whatever you name it. And I’m not giving you a speech here that this is the wrong way to live, but I just say: if you want to break out of this dopamine trap, you really need discipline and routine.

[00:37:40] So for me, I get up every morning at the same time, no matter if hell breaks loose, I’m getting up, right? And I go to the gym first, because I know I’m becoming a very sad person if I don’t follow through. Actually, a sad person, and a person I find almost disgusting in the sense of being undisciplined, you know, just hanging [00:38:00] around on my phone, eating wildly.

[00:38:03] This is really something I lived through for a long time, especially in COVID. And then I was like, no, stop, you have to stop this. You have to take back control of your brain. You cannot be led by, you know, social media rhythms, your phone, and all this. This is not the life I want to live, you know? And then, really, I think that requires a routine: exercise, real social connections, not social media likes but real connection, real life, knowing when to turn off the technology, knowing what you consume,

[00:38:30] like in terms of food, but also in terms of news and whatever. It is really something that was a huge game changer for me. I was in a deep depression for quite a few years, and when I really started taking control back of my brain, especially in such a hyper-digital world, I just started feeling so much better.

[00:38:48] That’s really my appeal to the audience, to be happy. And then the last thing, we sadly didn’t have time to cover it, but: the free energy principle, check it out. It’s my favorite nerdy topic. It’s a neuroscientific [00:39:00] theory coming out of physics, which is now being applied to create the next wave of AI generations.

[00:39:07] So I’m a huge fan. There’s a podcast of me online, like a one-hour interview where I’m nerding out about it. So this is just something that I find super cool to look into, if you’re into how neuroscience findings can even create new AI technologies.

[00:39:22] Luke: No, it’s fantastic. I’d love to have you back too, to just dive deep on that.

That’d be awesome. But yeah, where can people find you online if they want to follow what you’re doing, or get work that you’re publishing, or just follow along?

[00:39:36] Benjamin: I think just on LinkedIn, I’m posting like almost daily. That’s like, that’s also crazy. It’s a lot of pressure.

[00:39:41] Like, you know, I have a great team helping me with the content, but all the content of course gets reviewed by me in the end. Again, I don’t like to be online too much, so I try not to be. So everyone, you can come online for my posts and then close the apps again, please. And then of course, if you’re interested, if you have a cool event happening, just book me as a [00:40:00] speaker.

[00:40:00] I love talking about these things for five hours on stage. I have a website and a newsletter where I’m publishing research papers and all these things. So yeah, my website and LinkedIn are probably my major channels. But then also, starting actually next Monday, I’m starting Instagram, TikTok, and all those things, which I myself don’t even own.

[00:40:20] So it’s just going to be snippets of my keynote speeches. So you see, this is a very honest pitch where I’m saying: open the app for 30 minutes, consume some good content, and then close it again, please. And focus back on the beautiful things in life, which I still think are the physical and real ones, and not only the digital ones.

[00:40:39] Luke: I can’t think of a better note to end on than that one. Also, we’ll make sure we include your info in the show notes and everything like that, so people can follow along. But yeah, Benjamin, man, this has been really great. I really appreciate you coming. We’d love to have you back.

[00:40:52] We can go deep on some of the theory too; we’d love that. Thank you so much for coming by. I really appreciate it. Thank you very much, Luke. [00:41:00] Thanks for listening to the Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com

[00:41:10] and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • The role ChatGPT has played in helping people overcome concerns around AI, and how we can continue to embrace the inevitable rise of AI
  • How countries differ in their attitudes toward risk, and how this impacts technological innovation
  • The boundaries humans should create with technology in response to decreasing attention spans
  • The profound implications of AI on human identity and society, along with the neuroscience of fear

Guest List

The amazing cast and crew:

  • Benjamin B. Bargetzi - CEO of Bargetzi & Company Group

    Benjamin Bargetzi is a keynote speaker focused on AI and neuroscience. He’s carved a dynamic career path, from leading roles at big tech giants like Google and Amazon, to becoming an international entrepreneur, advisor, and investor. Benjamin researched and studied the human brain at world-leading universities in London, Oxford, Singapore, and Zurich, where his research focused on how the human brain deals with change, risks, and uncertainty.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.