
Episode 2

Challenging the “Google Adjective” with Brave Search

Josep M. Pujol, Chief of Search at Brave, and Subu Sathyanarayana, Director of Engineering for Brave Search, discuss the history of Internet search and how it’s quickly evolving with the adoption of AI. They share search’s current limitations and their biggest challenges (noise reduction, relevance, and referencing) in their pursuit of improving search through Brave Search.

Transcript

[00:00:00] From privacy concerns to limitless potential, AI is rapidly impacting our evolving society. In this new season of the Brave Technologist podcast, we’re demystifying artificial intelligence, challenging the status quo, and empowering everyday people to embrace the digital revolution. I’m your host, Luke Mulks, VP of Business Operations at Brave Software, makers of the privacy-respecting Brave browser

[00:00:24] and search engine, now powering AI with the Brave Search API. You’re listening to a new episode of The Brave Technologist. This one features two guests, both colleagues of mine at Brave who are working on our Brave Search product. Josep, Chief of Search at Brave, has been working full time on solving problems with traditional search since 2014.

[00:00:42] He’s also a research scientist who has more than 30 papers, four patents, and a PhD in AI. We’re also joined by Subu, our director of engineering, who works primarily on infrastructure and search ranking. Subu holds a master’s degree in machine learning and spends a lot of time tinkering with transformer models and improving all areas of search. [00:01:00]

[00:01:00] We think you’re going to learn a lot from this episode. We talked about problems with traditional search options like Google and how the role of trust has evolved as the internet has. We also discussed the technicalities of how AI helps improve the quality and relevance of search and the benefits of making the switch to a solution like Brave Search.

[00:01:17] You’re going to learn about their hopes and concerns with AI, and their favorite resources for learning about it. And now, for this week’s episode of the Brave Technologist.

[00:01:26] Subu and Josep, welcome to the Brave Technologist podcast. Thanks for having us.

[00:01:34] Super exciting to be here. It’s always fun talking about AI. It’s such an open-ended topic, and it’s good to have discussions on it. What’s the most exciting thing each of you is working on now?

[00:01:43] So, both Josep and I work on Brave Search, which is a search engine. If you’ve not used Brave Search before, it’s similar to Google and Bing.

[00:01:52] So, you enter a query, and then we show you a bunch of links, which are search results that you can click on and explore. We also [00:02:00] do other stuff on top of it, but this is fundamentally what a search engine does, right? And as you can imagine, AI is pretty much used all through the stack. From the point where you collect data and want to put all the data together, to doing ranking or spell correction, literally every part of the stack in a search engine ends up using

[00:02:21] AI in some way. AI is this umbrella term which means a lot of things to a lot of people. I guess nowadays it mostly refers to machine learning, and more specifically deep learning, transformers, and attention models, and so on. Yes, that is something we use as well. It’s not the only thing we use, but it’s definitely sprinkled all over. Excellent.

[00:02:42] How about you, Josep? What’s the most exciting thing you’re working on right now? Well, right now, leading the Brave Search effort. I mean, that’s already a pretty exciting job. It’s not every day that one has a chance to build a search engine from scratch. I mean, with Subu, we’ve been working on this problem [00:03:00] for the last nine years.

[00:03:02] So it’s been a long journey, but, you know, you don’t create something that can be as good as Google in a couple of afternoons, right? So it’s a lot of work, and I’m still very excited about the project, even after nine years, as a computer scientist, because that’s my background.

[00:03:18] And Brave Search has all the problems of computer science in a single product, right? Because it has, of course, AI, because that’s the name of the day, right? But it’s also algorithmics, distributed systems, networking, quality assessment, software engineering best practices, large codebases, a lot of traffic.

[00:03:39] So it’s just all in one, right? It’s the best place to be. Well, you mentioned nine years, right? And that it’s been this kind of journey that touches on all these points. But what really motivated you to jump into search back in 2014, when Google had such a presence with search already and people were using that Google adjective? Was there something different that [00:04:00] you wanted to work on, separate from what was already out there, that motivated you to pick up search?

[00:04:05] Actually, I got this question a lot when we started, with Cliqz, which later became Tailcat, which later became Brave Search. Why? I mean, search is a solved problem, right? Why do you work on it? And in a way, search is a solved problem, if you believe that there is only one solution, right? But there are multiple solutions, as we have proven.

[00:04:25] But more importantly, it’s not just the product itself. It’s not the search itself, right? Search, in a way, is a window to the world. And in my opinion, and I’m sure Subu agrees, otherwise he wouldn’t have joined, it is extremely dangerous that there is a single view.

[00:04:43] Right. And we needed to have an alternative. So actually, if you want, you could say that this is almost a political statement, right? We believe that there should be more diversity, and rather than complaining about it, because we had the skills, we [00:05:00] started building it.

[00:05:01] Love it. I love it. What’s been the biggest challenge around building the search engine and trying to bring a more diverse option to market? I think the biggest challenge is sometimes not to get overwhelmed by the problem itself. Because when you start thinking about how to build a search engine, it sounds so overwhelming that it puts off a lot of people from even attempting the problem.

[00:05:22] So I think one of the first things that we did (I joined a bit later: Josep started nine years ago, and I started maybe two or three years later, so this was something they’d already started doing in a really nice way) was to boil the problem down to its fundamentals. Figure out what the big issue is. And this is something that I’ve learned over my time here as well: the problem is not getting all the data on the web, at least the publicly accessible data.

[00:05:47] That is something which can be done, and it was sort of cheap back then. It’s even cheaper right now; most people, even a student, can do this, right? Just use storage on something like AWS and scrape the entire [00:06:00] web. The problem, when you do search, is removing noise from your results. If you just search for Facebook in this dataset, you’re going to find a hundred million pages, if not more, which contain the word Facebook somewhere prominent on the page.

[00:06:13] And deciding on the top 10 or 20 results that you finally want to show to the user is the challenging part. So that was the main problem, and we had a very unique solution to it compared to traditional search engines. We can go into more detail, and Josep will definitely add more to this, but the fundamental difference, I would say, is that Brave Search has been built in a world where a browser exists, and a browser is a powerful access point to the internet for all our users.

[00:06:41] A lot of the search engines before were probably built in an era where either they did not control the browser or the browser was not as widely used, right? So the techniques that they used were slightly different from what we ended up using. No, I’d just like to double down on this noise reduction topic.

[00:06:58] I mean, I’m old enough to [00:07:00] remember when Google was released, not on google.com, but on stanford.edu/google. Right away I was very impressed. Back then, I was starting my PhD, and the PageRank paper was all over the news, right? PageRank was the secret sauce.

[00:07:16] And in reality, that was not the secret sauce of why Google was successful. Apologies to the Google founders; if they see the podcast, they can contradict me. But Google was really successful not because of PageRank. PageRank was the gimmick. Google was successful because they were the first ones who, because storage became very cheap back then,

[00:07:36] and networking was very cheap for them because they used university resources, were able to crawl the entire web, and then were able to use the backlinks. The thing about backlinks is that you have to crawl the entire web, or a large fraction of it, to get them, because the backlinks are not on the page itself; they are on all the other pages that point to you. And what do these backlinks have? The anchor text. And the anchor text is a human-created summary of what the page it points to is about, right? So that anchor text [00:08:00] was much less noisy than the content of the page itself.

[00:08:13] I see. That’s how they actually achieved noise reduction to a large, large degree. They were the first ones to do that, because their resources were different and they were using a different approach. They managed to do what AltaVista couldn’t, because AltaVista was using a centralized system with two big servers. They were able to crawl only forward, not backwards, you know, in a way, right?
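The anchor-text trick described above can be sketched in a few lines: aggregate the anchor text of every link pointing at a page, and use those human-written descriptions as a low-noise index instead of the target page’s own content. This is a toy illustration with made-up data, not anyone’s production pipeline:

```python
from collections import Counter, defaultdict

# (source_page, anchor_text, target_url) triples gathered from a crawl.
# The anchor text is a human-written summary of the *target* page.
links = [
    ("blog.example/a", "social network facebook", "facebook.com"),
    ("news.example/b", "facebook login page", "facebook.com"),
    ("wiki.example/c", "facebook the company", "facebook.com"),
    ("spam.example/d", "buy cheap pills", "pills.example"),
]

# Aggregate anchor terms per target: the index is built from what
# *other* pages say about a page, not from its own (noisier) content.
anchor_index = defaultdict(Counter)
for _source, anchor, target in links:
    anchor_index[target].update(anchor.lower().split())

def score(query, url):
    """Score a candidate URL by anchor-term overlap with the query."""
    return sum(anchor_index[url][t] for t in query.lower().split())

print(score("facebook", "facebook.com"))   # 3
print(score("facebook", "pills.example"))  # 0
```

The point of the sketch is that a spam page stuffing itself with a keyword gets no credit unless other pages describe it with that keyword.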

That’s why they got smart: they found a place to reduce the noise, which is a difficult thing. The unique thing about Brave Search, and the prior work that led to Brave Search, is that we basically use human signals, of course collected with privacy-preserving techniques, et cetera, to actually filter down the noise [00:09:00] even more.

[00:09:00] So that means query logs, but also actions that people take on the page, which create an even less noisy signal that we can leverage to create a search engine that is good enough with very limited resources, right? Yeah. The problem is that this approach cannot be done in a month or two, right?

[00:09:22] It’s a very slow process, because people will not use you unless you are good, and if you are not good, people will not use you. It’s a very cold-start effect, and you need the browser to be paired with it, right? So it takes a long time to build up the basis. And that’s what we had when we came to Brave.

[00:09:40] It was mature enough, and we launched it properly, because before it was not available. Well, you can see the results and judge for yourself. Yeah, no, that’s great. Interesting context too, especially drilling down on the noise. And just to back up a bit to what you were talking about around gimmicks.

Right. There’s been a lot of noise, [00:10:00] especially since November of last year, when this whole AI hype cycle started to kick in, about the impact that these models and these prompts will have on traditional search, right? And whether it’ll be better or worse. What are your takes on this?

[00:10:14] Do you see it even being competitive, or do you see that search engines will have to become more relevant? Are they two different buckets to you guys at this point? So Subu will probably follow up with more comments on that, because he’s the lead AI person on the team right now.

But search will change. Does it mean that the search engine as we know it will disappear? No, that’s not going to happen. Search is not going anywhere, and Google is not going anywhere, right? That’s a fact, no matter what people think.

Will the way people access search be transformed a little bit? Yes, that’s probably the case. But that has happened already in the past, right? We have so many knowledge panels, knowledge graphs, and instant answers, and that has already changed how people search; it has changed the query language that [00:11:00] people use.

[00:11:00] Before it was only keywords; now it’s more natural language, and queries with LLMs become even full sentences. So it has an effect. I do not think that search engines are going to be removed from existence and that an AI will take over. No, it’s going to be affected like any other field.

[00:11:19] I mean, I can think of a million fields that will be affected by these models. I’m a lot into gaming, right? Imagine how difficult it was before to create an open-world game. With AI, now you can actually have very realistic, rich non-playable characters.

Right? Or you can automatically generate worlds in the metaverse, right? So there are a million things in computer science, and a million things outside computer science, right? Sure. Accountants that can automate invoices. This change is here to stay, let’s put it this way.

[00:11:53] But of course it’s overhyped. It’s always a balance, right? I don’t know, Subu, what do you think? [00:12:00] I know you two are also working on something like a summarizer in search, things that are already visually impacting the search experience. But what are your thoughts on this too? If anything, I think what we’re actually realizing is that there is an increased demand for search engines, right?

[00:12:15] So the way these LLMs are fundamentally trained is that you take a bunch of unstructured data from the internet. Whoever has been putting out these models (if you take the big ones now, whether it’s Llama or Falcon), it’s the companies with enough resources in terms of GPUs, and they use a couple of billion pages of text from the internet, which they collect.

[00:12:36] And they just learn from unstructured information, right? The assumption is that if you can parse enough of this unstructured information, it basically contains all the information that you would need to answer questions. But the quick realization was that once you have this, you still need a way to, first, keep this model up to date, and also to find smarter ways to train it. Because is it better to just throw billions [00:13:00] and billions of pages at it and spend millions of dollars training it?

[00:13:03] Or would you use something like a search engine, which is able to actually find the best pages on the internet, so you don’t really need to deal with all the noise? Again, we get back to the whole noise topic. And also, how do you keep it up to date with all the latest stuff that is happening? In fact, we saw this happen with some of our competition, basically locking down their APIs and making them much more expensive to use, because people quickly realized that you still need a powerful search engine underneath to make these sorts of systems work.

[00:13:31] As for what we have put out so far, I would say the feature which is closest to this is the summarizer. What the summarizer does is: the user does a query, something like, is it safe to drink expired milk? I just came up with a random query, but you could do a query like this. And for the summarizer, we do a regular search.

Then we look at the results that are out there, and we try to provide a short summary to the user so that they don’t have to actually click through the results and [00:14:00] spend time on them. If we believe it’s the sort of question that can be answered with a simple paragraph, we do it. So that’s what we have right now.
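The summarizer flow just described (run a regular search, then condense the results into a short paragraph) can be caricatured with a purely extractive sketch. Brave’s real summarizer uses AI models; this toy stand-in just scores result sentences by term overlap with the query:

```python
import re

def summarize(query, results, max_sentences=2):
    """Toy extractive summarizer: pick the result sentences that
    share the most terms with the query (a crude stand-in for the
    real model-based pipeline)."""
    terms = set(query.lower().split())
    scored = []
    for page in results:
        # Naive sentence split on terminal punctuation.
        for sentence in re.split(r"(?<=[.!?])\s+", page):
            overlap = len(terms & set(sentence.lower().split()))
            if overlap:
                scored.append((overlap, sentence))
    scored.sort(key=lambda s: -s[0])
    return " ".join(s for _, s in scored[:max_sentences])

# Hypothetical snippets from top search results.
results = [
    "Expired milk can be safe to drink shortly after the date. Smell it first.",
    "Milk is a good source of calcium.",
]
print(summarize("is it safe to drink expired milk", results))
```

The sketch answers the user’s question in one short blob instead of ten links, which is exactly the convenience (and, as discussed later, the risk) of the feature.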

[00:14:06] That’s awesome. Yeah, it seems like one of those helpful features, and a lot of this AI innovation seems like it’s going to be most impactful where it’s most helpful for people. What things are you all concerned about, if anything, around AI? I know there’s a lot; I want to separate this from the doomsday or AGI doomerism stuff, but what practical things are you guys concerned about?

[00:14:24] And is there anything that you guys are doing to front-run those concerns, or approaches that are different that help safeguard against them? We are worried, but not so much that it’s going to be the end of the human race. I mean, I could be wrong, right? Stephen Hawking said that general artificial intelligence could spell the end of the human race.

[00:14:44] And who am I to contradict him? But in any case, on the general part of it, we are more towards: AI is just a tool, a very powerful tool. Any powerful tool has some repercussions, right? And one of the repercussions we are concerned [00:15:00] about is that those tools can easily be weaponized.

Right, because they are very powerful. So you can create fake content, fake images, fake everything. That’s a little bit dangerous. Also, they are not neutral at all, right? The way you build them is really by collecting data.

And this data is not free of biases of any sort. Actually, it doesn’t even have to be true to begin with, right? That’s an issue. On top of that, to tame them a little bit, you have people that put up guardrails that constrain the AI to behave a certain way.

And those people, in a way, are the holders of the truth, right? Because they decide what can be done and what cannot be done, what is acceptable and what is not acceptable. So there is this powerful thing that is just there, that can be abused. And if something can be abused, it will be abused, right?

By definition, that is a concern. I’m not so concerned about [00:16:00] random people abusing those tools. I’m more concerned about institutions or states abusing them, right? It’s kind of the same as with weapons, you know: people kill people, not the weapons, or something like this.

The guns do not kill people, though. It’s kind of the same, without entering into the gun-rights debate, which I know is polemic in the US (I’m not from there). But there’s some truth to this assertion, right? The tool is just a tool; it depends how it’s being used. And in the context of search, it’s particularly

concerning, because, well, we talked about the AI summarizer before, right? Right. The AI summarizer gathers, from a query, from a question, results from webpages, which can have contradicting views, whatever, and puts them into a single blob of text. That suddenly becomes

the only truth, right? And that’s very convenient. As you said, people like what is convenient. But convenience, in a way, is a price you pay with a loss of freedom or a loss of [00:17:00] choice. Sure. Right? Because now suddenly, you know, the answer to, I don’t know, whether milk is bad, is not going to be Wikipedia, right?

Which is already mostly trustable, though sometimes it cannot be trusted. Now it’s going to be an AI, which is even less trustable than Wikipedia itself, right? That’s an issue. We are concerned about that. We are so concerned about it that we launched, a few months ago, perhaps even a year ago, a project called Goggles, like the glasses.

Yeah. It allows people, via some rules that they define, to actually alter the ranking of the search engine, so that everyone can incorporate their own biases, right? So that you don’t have only one view of the world, but you can have multiple. And because those biases are explicit (that’s the important part: you choose them, you choose to put the goggles on), you would have multiple versions of reality, right?
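The Goggles principle (explicit, user-chosen rules that re-rank results) can be illustrated with a small sketch. This is not the actual Goggles rule syntax, just the underlying idea, with hypothetical hosts and scores:

```python
# A "goggle" here is an explicit, user-chosen set of re-ranking rules.
# Illustrative only: not Brave's real Goggles rule format.
tech_blogs_goggle = {
    "boost":    {"blog.example": 2.0, "smallsite.example": 1.5},
    "downrank": {"megacorp.example": 0.3},
}

def apply_goggle(results, goggle):
    """Multiply each result's base score by the user's rule weight,
    then re-sort. The bias is explicit: the user chose these rules."""
    def adjusted(result):
        host, base = result
        factor = goggle["boost"].get(host, goggle["downrank"].get(host, 1.0))
        return base * factor
    return sorted(results, key=adjusted, reverse=True)

# (host, base relevance score) pairs from a hypothetical query.
results = [("megacorp.example", 0.9), ("blog.example", 0.5), ("other.example", 0.6)]
print(apply_goggle(results, tech_blogs_goggle))
```

With the goggle applied, the small blog outranks the big site the user chose to downrank, which is the "multiple versions of reality" idea in miniature.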

So that would kind of be our solution to that. But, you know, not all the problems in the world have a technical [00:18:00] solution, right? It’s being used by some people, but not by many. Why? Because it’s not convenient. Because people, you know, are in a rush, or they are not equipped to make a judgment on every single topic.

So they have to trust. And this trust, before, was placed in a collective of webpages, so you had some variety. Then it became knowledge graphs plus a wiki page, so it was reduced. Next it’s going to be an AI, reduced even further. Homogenization is never good. For the summarizer itself, one thing that we tried to do from the very beginning was to actually add references

to the summary that is generated, right? Again, we don’t claim to be perfect in this; we do make mistakes, and it is something we are constantly working to improve. But building explainable AI is something which is very important to us, because the problem with some of these models, as powerful as they are, [00:19:00] is that it’s extremely hard to predict what they’re going to generate at any point in time.

Even for people who work on this on a daily basis, it’s very hard to predict what the output of the model is going to be. There are different ways in which people try to address this issue, but in our case, it’s about adding references and making sure the summary is backed up by other webpages that the user can click on and read more. Yeah, one of the differentiating things does seem to be those footnotes, how things are sourced, and it does seem like that’s kind of missing elsewhere.
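The footnote idea they describe can be sketched as follows: each supporting sentence in the summary carries an index pointing back to the page it came from, so claims stay verifiable. This is a hypothetical structure for illustration, not Brave’s actual summarizer output format:

```python
def summarize_with_refs(query, pages):
    """Attach a [n] footnote to each supporting sentence, mapping it
    back to the source URL the user can click through to."""
    terms = set(query.lower().split())
    summary, refs = [], []
    for url, text in pages:
        for sentence in text.split(". "):
            if terms & set(sentence.lower().split()):
                refs.append(url)
                summary.append(f"{sentence.rstrip('.')} [{len(refs)}].")
    return " ".join(summary), refs

# Hypothetical (url, snippet) pairs from a search.
pages = [
    ("health.example/milk", "Expired milk may be safe for a few days"),
    ("dairy.example/faq", "Always smell milk before you drink it"),
]
text, refs = summarize_with_refs("is expired milk safe to drink", pages)
print(text)  # each claim ends with a [n] marker
print(refs)  # refs[n-1] is the source for marker [n]
```

The design point is traceability: even if the summary itself is wrong, the reader can follow the footnote and judge the source directly.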

What other ways do you think AI is overestimated and underestimated by the general public? I mean, it’s an extremely hard question, because we are all learning as time progresses, and even our own expectations are changing over time. If you had told me two years ago that there would be a single LLM able to do all the things it does right now, I would probably not have taken it very seriously. It’s very hard to predict.

But I think with any such topic, which is [00:20:00] not well understood by most people, even people working in the field, you do end up with polarizing opinions. There are some people who completely underestimate it and say, hey, this is just pattern matching, right? There’s nothing really to worry about.

This is just stuff we’ve been doing for 50 years; it just continues to be the same. And you have the other end of the spectrum, the doomsday scenarios. For me, I’m probably more towards the pattern-matching side. I don’t believe in the doomsday scenarios. I mean, the overestimation of AI is, of course, the idea that we are one step away from artificial general intelligence, right?

I mean, it’s difficult to know if that step will actually be taken someday. Nevertheless, the models that we have right now are pretty impressive. If you check the progress of transformers, with GPT-1, GPT-2, GPT-3, there was one moment when quantitative improvements became qualitative.

And that probably happened around OpenAI’s release of ChatGPT. It’s like, oh, that thing is impressive. That doesn’t mean, you know, that growth will be [00:21:00] exponential, right? It can be linear, but it can also be sublinear. There’s no way to know which one it is. It’s very difficult to predict the future.

For me, the biggest overestimation of AI is this hype that it is going to be the problem, and that we need to act to solve a problem that we do not have, right? For example, the moratorium that was proposed on AI makes little sense. Of course, you can ask that people be responsible about what they release.

You should not be releasing products that are harmful, or that use a lot of copyrighted material; there are many things you could argue, but not on the grounds that this endangers mankind. That reduces the credibility and puts the debate in a place where it shouldn’t be, which is, you know, that we should regulate this, that we have to create a sort of universal income because everybody’s going to be without jobs, right?

[00:21:59] [00:22:00] Right. We can discuss that. Of course, it might actually be a very good idea, but that has nothing to do with AI, right? No matter what Yuval Noah Harari says; those are two different things. I think I speak for myself and Subu, and probably for anyone in Brave Search:

we are not followers of the singularity religion, right? We believe that AI is just a tool, a very powerful tool. Again, if weaponized, it will have dramatic consequences; don’t get it wrong. But this AI step basically just happened over the last three years, since the transformer crystallized into public opinion with OpenAI.

I would put it at the same level as the transistor, electricity, the internet, the steam engine, right? So it’s going to be transformational. But as we have had plenty of such transformations, mankind is pretty well equipped to deal with it. And the underestimation would be the opposite, right?

That it’s just [00:23:00] a gimmick or a glorified calculator. No, no, there is more to it. Because one of the things that is most impressive about these models is not so much the output, but the ability to understand the input.

Right. The understanding, that’s actually very impressive. And the ability to keep the context of a conversation, that’s something. You can do plenty of things with that, right? Imagine something like, well, we are all computer scientists here: if you have a problem and you want to automate it, you write a program that automates it. You can do it yourself, and then you create a startup, and so on. But imagine all the people who do not have programming knowledge, but have problems that can be automated with natural language. I was talking before about the accountant who needs to sort out invoices, right?

He can start feeding invoices to the system and train it [00:24:00] to do a better job at what he is doing, without the need to write a line of code. And that basically automates that particular part of his job. You know, it’s going to be transformational, not so much because of the output, but because of the way that the input is treated.

Is that where you see the most impact from AI in the near term: other SaaS and other tools integrating AI to get those inputs?

Absolutely. Yeah. We’ll probably see specialized AI be the first thing that happens, right? With these models, we should not underestimate the power of a good-quality training dataset. So if someone has a very specific problem in mind to solve, and they have very good

data available for it, it is possible for them to build really sophisticated solutions, which would transform the industry in some way. So that is definitely something which will happen. For us, are there ways that we’re able to tackle this with Brave as a browser, and uses in the browser aside from the search engine use [00:25:00] case, that are exciting to you guys now, that you’re thinking about or working on?

Yes. The browser part of Brave is working very hard on AI to handle tasks that are browser-driven, which, if you think of it, are most of them, right? So of course we’ll start with something basic and build from that. But yeah, we do believe in the chatbot interface, in a way. The chatbot interface is one thing which is very good because of its understanding of the user’s language.

So it’s a very good, natural interface for humans. But the ability to solve a problem with very little annotated data, that will be very helpful for accomplishing tasks. And that’s actually where the real revolution will happen. It’s not going to be as flashy as, oh, here I have an oracle called ChatGPT that you can ask questions.

And it can answer anything up to 2022, right? That’s very impressive, [00:26:00] but what is the real use of that? I mean, I already have a way to search, right? The real revolution will be that those models, when trained on the specific tasks that you do,

might become better than you at doing those tasks, which most likely are repetitive and with very low added value. So you will actually increase productivity, and you will have more time to do more interesting things, right? And for anything that has to be done online, well, the browser is the best place to be.

Awesome. Yeah. And I know what we’re trying to do is break this down for new people who might be getting into this. Are there any resources or knowledge bases that you both recommend for people who are trying to? Maybe they’re developing in other areas and this is something new and interesting to them, or they’re just getting into this.

Are there any resources that you guys would recommend to anybody? Sure. I would probably take this in two ways. One is for a more technical audience: if someone is actually looking to become a machine learning engineer or work on these models, and so on. The book that I found pretty [00:27:00] useful is called Artificial Intelligence: A Modern Approach, which has been used as a textbook in universities for a long time, but it’s been revised recently, I think a couple of years ago.

And they put out a new version which takes into account all the latest deep learning stuff as well. It’s a great introduction to the field. I would also recommend any video or blog from Andrej Karpathy, who was a Stanford researcher, then I guess at Tesla, now at OpenAI. I’m not sure; he’s moved around. But I think he breaks down the problems into the fundamentals very well.

So it’s a very good technical introduction for people who want to get in. If you’re looking for a non-technical view, more a philosopher’s viewpoint on things, I would recommend the Sam Harris podcast. It’s something that I listen to. It also goes into related topics; I guess people who are interested in stuff like AGI would end up talking about consciousness or free will and things like that.

[00:27:49] So it’s generally a nice podcast for those things. Those are the two things I would say. The Lex Fridman podcast, although it touches many other topics, including [00:28:00] Jiu-Jitsu, right? But over the last six months he has had many guests who are relevant in different areas, from AI practitioners working at companies, from hardware to software, to investors who invest in the area.

[00:28:13] They have been very, very interesting. So for anyone who is non-technical and just wants to know a little bit about what’s going on, the Lex Fridman podcast is very interesting. And then on the technical part, if you know how to program, the best place is just to play on Hugging Face. Hats off to Hugging Face,

the community would not be the same without them. If you’re really non-technical, I would say just use Brave AI, or get an account with OpenAI or Anthropic, any of those, or download a Llama version onto your laptop and just play with it. But don’t do question answering just for the sake of it, or just “read me this text in the tone [00:29:00] of some

[00:29:00] great author.” I mean, those are gimmicks. But if you actually start to do small tasks (I have these, and then that; I have that, and then this), you will see that you can actually automate a lot of things. It’s both entertaining and has the potential to become useful.
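Joseph’s point about chaining small tasks rather than one-off question answering can be sketched as a tiny prompt pipeline. This is an illustrative sketch only, nothing discussed on the show: the `chain` helper is made up for this example, and the commented llama-cpp-python usage (including the model file name) is a hypothetical assumption.

```python
def chain(text, prompt_templates, complete):
    """Pipe `text` through a sequence of prompt templates,
    feeding each step's output into the next prompt."""
    out = text
    for template in prompt_templates:
        out = complete(template.format(out))
    return out

# Hypothetical usage with a local Llama model via llama-cpp-python
# (model path, parameters, and prompts are assumptions):
#
# from llama_cpp import Llama
# llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf")
# complete = lambda prompt: llm(prompt, max_tokens=256)["choices"][0]["text"]
# result = chain(
#     meeting_notes,
#     ["Summarize this briefly:\n{}", "Extract the action items from:\n{}"],
#     complete,
# )
```

The point is the shape, not the model: each small task feeds the next, which is how "I have these, and then that" turns into automation.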

[00:29:18] Well said. Speaking of entertaining, just to end on a light note: a favorite movie or book on AI that you could recommend for the audience, something that motivated you? I recently tried to get my wife to watch Ex Machina. It’s a movie that I watched a long time back. I’d forgotten bits of it, but then I watched it again with her.

[00:29:36] I liked the movie. I think it’s very good. My wife found it quite creepy, but that’s, yeah, the point of the movie. That’s definitely a movie I would recommend. For the book, I think Superintelligence by Nick Bostrom. It’s a very nice read. Again, it’s not that I agree with everything said in the book, but it’s a good read, especially, I would say, for people who are technical.

[00:29:55] It’s always nice to read the other viewpoint, from people who don’t come from the same background, on [00:30:00] how they think about this stuff. So that’s a good one. How about you, Joseph? I mean, I would have to go not so much with AI, but with science fiction and AI. I’m always more attracted to the apocalyptic nature of those futures.

[00:30:18] Stross, I believe it’s Charles Stross. I think, well, Accelerando, it’s like a story of the future, like multiple generations and how they’re affected by technological change. It’s not something that I agree is all going to happen. I actually don’t believe that that’s what is going to happen, because it also goes towards this singularity concept, but still, it’s a very nice read.

[00:30:40] Oh yeah, and then another one about the singularity, and that one is non-fiction: Ray Kurzweil, The Singularity Is Near. I think I read it when it came out, when I was in the middle of my PhD, and it really got me. I mean, I was hooked on that book. I believe it’s bullshit,

[00:30:57] that the singularity is near, right? [00:31:00] But in any case, it’s a very nice read, and he actually has very valid points about, you know, how what people think of in linear terms is really growing exponentially. I would recommend it for anyone who wants to read a good book. Excellent. Well, I really appreciate both of you, Joseph and Subu, coming on today and sharing a bit about yourselves and your takes on AI.

[00:31:18] It was a pleasure talking to you. Very thankful. Yeah, much appreciated. Thanks guys. Thanks everybody for tuning in. We’ll see you next time. Thanks for listening to the Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app.

[00:31:33] If you haven’t already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.

Show Notes

In this episode of The Brave Technologist Podcast, we discuss:

  • Problems with traditional search options like Google and how the role of “trust” has evolved as the Internet has
  • The potential for AI to empower individuals without programming knowledge to automate tasks using natural language interfaces
  • Ways AI can be abused, emphasizing the need for responsible development
  • Top resources (movies, podcasts and blogs) for learning about AI

Guest List

The amazing cast and crew:

  • Josep M. Pujol - Chief of Search at Brave

Josep M. Pujol, Chief of Search at Brave, has been working full-time on solving the problems with traditional search since 2014. He’s also a research scientist with more than 30 papers, 4 patents, and a PhD in AI!
  • Subu Sathyanarayana - Director of Engineering at Brave

    We’re also joined by Subu Sathyanarayana, Director of Engineering at Brave Search, who works primarily on infrastructure and search ranking. Subu holds a Masters Degree in Machine Learning and spends a lot of his time tinkering with transformer models and improving all areas of search.

About the Show

Shedding light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all!
Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you’re a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together.