When AI Becomes the Attacker: Most Organizations Aren’t Ready for AI Threats
Speaker: You’re listening to another episode of The Brave Technologist, and this one features another speaker from the AI Summit, who we had the opportunity to sit down with while we were in New York in December. Dave Chatterjee is a leading authority on cybersecurity strategy, governance, and AI security.
As the creator of the Commitment Preparedness Discipline (CPD) framework, he helps organizations worldwide build resilient, high-performance security cultures. He’s the author of Cybersecurity Readiness, along with the cybercrime-themed novel The Deepfake Conspiracy. He hosts the Cybersecurity Readiness Podcast and is an adjunct associate professor at Duke University.
In this episode, we discussed why deepfakes and AI-enabled attacks are no longer future risks but present-day realities; the three most underestimated AI risks leaders overlook; how leaders can move beyond checkbox compliance and actually treat cybersecurity as a strategic business priority; and practical steps individuals can take to reduce exposure to fraud, impersonation, [00:01:00] and data loss.
And now for this week’s episode of The Brave Technologist,
Luke: Dave, welcome to The Brave Technologist. How you doing today?
Dave: Doing amazing. Thanks for inviting me.
Luke: You’re one of the speakers at the event today. What vision are you trying to champion?
Dave: Well, you know, the first thing that comes to mind when I think about AI and its evolution is we must become very responsible users of the technology.
The technology has a lot of positives, but it also has a lot of negatives. During my talk today, I talked about the deepfake threat. It’s not just a threat, it’s a reality. Right. And the consequences are devastating, both at the individual level and at the organizational level. So unless we are careful, and by we I mean individuals and organizations, unless we are proactive, unless we are responsible, we have trouble ahead of us.
So that’s the primary vision. Alongside that, I have a framework that I have built, which is gaining great traction in industry [00:02:00] right now. It’s called the Commitment Preparedness Discipline framework, the CPD framework. And that framework is all about proactive cybersecurity governance, and it’s very applicable to leveraging AI effectively.
Mm-hmm. So that’s what I have been talking about during my session. Plus, I did an immersive activity during the networking hour. And you know, so it’s been a
Luke: Great day at the conference. That’s great. And I think people will like to hear that people are working on these issues too.
I mean, right now you’re kind of warning executives about the dark side of AI. What’s the most misunderstood risk that leaders still underestimate when it comes to AI-enabled attacks?
Dave: So, I’m gonna frame it as the three S’s. Okay: scale, speed, and surprise. Okay. Scale is the cascading effect.
Once the network has been penetrated, as we know, the perpetrators are able to escalate the attack, move right across horizontally, and attack other systems and other networks. This is very common when the attack originates from [00:03:00] a vendor system. And the scale can be significant.
If you think about critical infrastructure going down, whether it’s the financial systems, the electricity grid, or the water filtration plant. These are the things that organizations should recognize and not take lightly. And then one more thing I’ll say is the precision of the attacks.
They’re so well calibrated that they know exactly which vulnerability to target, how to target it, and what to get out of it. That level of precision is only possible with the aid of technology. You know, when humans were developing attack methods, attack mechanisms, they were good, but they were not as sophisticated as they have become today.
Especially think about this deepfake threat. Right. Or deepfake [00:04:00] attacks. We’ve reached a point where it’ll become very difficult to distinguish whether the person talking to me is my son or not. Right. I’ll have to literally ask him questions like, where did we have dinner last? What did we eat for dinner?
You know what I mean? Absolutely. And that’s the risk, the challenge that we are facing.
Luke: Yeah. I worked in advertising for a good amount of my life, and one of the things that keeps me up at night is that a lot of these technologies were built to proliferate at scale.
And I feel like people are very much asleep on how almost every growth hack can be used as an attack surface in this new landscape. I think that’s one of the things that’s great about the work you’re doing: exposing these risks, making people more aware of what they are.
When AI becomes both the attacker and the defender, how does that reshape an organization’s entire risk posture? [00:05:00]
Dave: So, and that makes a lot of sense, because you have to counter AI-enabled attacks with AI-enabled defense systems. Otherwise it’s hard to match that scale, the speed, the precision I just talked about.
And, you know, I like to say that if you want to deal with complexity and variety, you have to match it with equal complexity, equal variety. Right. It’s like a three-pin plug: it won’t fit into a two-pin socket. It needs three pins to fit into the wall socket.
Right. Right. So that’s what I talk about when I talk about complexity. That’s one. The second, more specifically: organizations must leverage AI to detect, must leverage AI to continuously adapt, must leverage AI to gather intelligence at different levels.
At the data level, at the system level. So that, again, [00:06:00] using AI technology, all this intelligence can be integrated to come up with a coherent story of what these attacks are, how they are happening, and how we counter them. So you are learning from the attacks and adapting from there, evolving from there to counter the attacks.
So that’s how AI has to be leveraged as a defender, and also as a proactive attacker. There are tools right now where, if they detect a potential ransomware attack, they will proactively offset the attack by launching a counterattack, and that might serve as a deterrent to some of these perpetrators.
Right. So that’s where I see technology playing a big role.
Luke: I think the adversarial games are gonna hit a new level.
Dave: That is the right word.
Luke: Totally. A new level. Yeah. It’s really interesting. Looking across all the companies you advise, what’s a clear signal that an organization is not ready for AI threats, even if they think that they are?
Dave: It’s an interesting question, because I talk to a lot of companies and I get a chance to gauge where they are in their cybersecurity readiness posture. So this is how I’m gonna put it. Unfortunately, I don’t have the data on that yet, so this is a point of view, an opinion.
Sure. I think most organizations have a reactive posture. They don’t really think it’s worth the investment to be very proactive, to leverage the latest and greatest AI technologies. And you know what? One thing to keep in mind: just leveraging a technology doesn’t cut it. You also have to have the right kind of talent.
The right kind of processes, the right kind of structure. So a lot of thought has to go into it. Often the CEO, the senior leadership team, will wonder: why should we put in [00:08:00] so much effort towards something that we don’t know will ever happen? We might as well use that energy and effort in more tangible directions.
Uh-huh. Which is why our company was formed, let’s say: to sell a certain product or to develop a certain product, so we might as well focus everything on that. And as for security, we do have a team, we are meeting the compliance guidelines; if something happens, we will deal with it then. Yeah. It’s, for lack of a better word, an afterthought.
Luke: Yeah.
Dave: I’m a big proponent of asking the leadership to look at cybersecurity, to look at AI, as a strategic opportunity. The reason I say that is I want the mindset to shift from treating security as something not integral or core to the business to making it integral and core to the business.
I think when leadership recognizes that by effective use of AI, [00:09:00] by effectively securing applications, the company can have a strategic edge. Yeah. Can have a competitive edge, where the customers recognize that, okay, our data is safe with these guys. They know what they’re doing. They’re putting in money, they’re putting in effort, they’re continuously monitoring, their leadership team is keyed in, locked in.
That’s kind of what is ideal. Yeah. But on a scale of one to ten, if I were to rank companies, they’ll be anywhere around four. Mm-hmm. So that’s what I see. To be more specific, if I see a company doesn’t have a CISO in place, right, or doesn’t have an AI oversight team in place,
Yeah. Those are indications that they don’t take it as seriously as they should. Right. The other day I was having a conversation with a gentleman, a senior leader, who said, look, Dave, we don’t have the money to hire a CISO. In many ways I understand, because their resources are limited, but they have to [00:10:00] offset that by creating an internal leadership team that functions as a CISO, like a CISO committee,
Luke: Right?
Dave: So that’s how organizations will have to innovate when they have to deal with budgetary issues, without leaving their environment exposed to different forms of attacks.
Luke: How much is a shift in operational security part of that discussion?
Dave: You know, I’ll again emphasize a couple of things that I’ve been talking about.
Operational security is good, like all the controls that are out there. If they are implemented well, and they’re continuously reviewed and improved upon, they are definitely a step in the right direction. But they have to be constantly monitored. Yeah. When the alerts are received, they have to be promptly reviewed and responded to, even if the response means we will not do anything about it.
Luke: Right.
Dave: That’s okay, as long as you’ve reviewed it, you’ve logged it. [00:11:00] Now, all of this cannot be done by humans. You have to use technology, and there are good technologies out there to do it. But there has to be human intervention. There has to be exception reporting, where the system will direct certain types of alerts that are of a critical nature to a team. And the team then has to quickly process that intelligence. So that’s the way an organization has to evolve its operational readiness.
It’s not like we should scratch everything out and start all over again because now we are dealing with AI threats.
Luke: Right.
Dave: In fact, I feel: go back to your basics. Yeah. The foundations. Do them well.
Luke: Yeah.
Dave: What has happened a lot is organizations tend to have a checkbox or a checklist mandated by certain regulations, certain requirements, and they make sure they check them off. Yeah. Now, checking them off doesn’t mean that they’re being done well. Right. To give you an example: security training and awareness.
Many organizations will outsource it [00:12:00] to a technology vendor, to security vendors, and they will have a portal where employees will go log in, they will sign on, they will be part of interactive demos to answer questions, and then they’ll have to sign off saying they have had the necessary security training. Which is kind of an industry standard. Right. But in my opinion, that is not good enough. Security training has to be customized. Security training must be very interactive, must be tracked in terms of effectiveness, and must happen more often, but in small increments.
Luke: It sounds a lot like hygiene, like a hygienic approach.
Dave: That is exactly the word. Yeah. Does the organization have the right kind of cybersecurity hygiene? And, you know, I mentioned my framework: commitment, preparedness, and discipline. Discipline is all about maintaining the hygiene.
Luke: Yeah.
Dave: Yeah.
Luke: I think that’s a great way of looking at it. You touched on this in the beginning around deepfake detection [00:13:00] or deception. Yeah. It’s scaling fast. The whole Taylor Swift thing hit the zeitgeist pretty quickly on that front. And these tools make it so easy to take anybody’s picture and throw them in there. What is the next evolution of this threat that most executives are not prepared for?
Dave: The next evolution of the threat is like this: you are talking to me and you are Luke, and I know you are Luke because you’re sitting in front of me, right? But we go on a Zoom, and I’m trying to test if that’s Luke, and I’m asking you questions in different ways to see if that’s really Luke, and Luke’s responses are so genuine that I can’t even figure out if that’s the real Luke or not.
Yeah. Like the example I just shared with you: when my son calls and asks for something, and I say, hey, tell me when we had dinner last, right? What did we have for dinner? And the answer will be so correct, I would assume that that’s really my son. So that’s where things [00:14:00] are. Deepfake threats are becoming highly adaptive.
Highly interactive. Yeah. You know, I was giving an example during my session of the MGM Resorts breach that happened, and the process that was followed. I shared with them the anatomy of the breach, how the breach happened. And at one stage, these perpetrators used LLMs to write a contextualized script of how the help desk people interact with the company’s employees: when somebody calls and says, hey, can you reset my password, or, my MFA is not working, can you fix it? They learned exactly how that conversation evolves. So when they called, impersonating an employee, and they were receiving questions, the way they were answering, these personnel couldn’t detect [00:15:00] that these were perpetrators, attackers, fraudsters. They reset the MFA, these guys were in the system, and the breach escalated. The company’s operations had to come to a screeching halt for several days. Millions of dollars were lost.
So that’s where deepfakes are going. They’re becoming very hard to detect.
Luke: I can tell you, I think about this all the time. My producer Sam was telling me we’ve had a hundred episodes, right? That’s a hundred episodes of me out there saying, horribly, every word I could put through my head. I think about this probably too much sometimes.
But what do you think is a good way of shaping the problem? What are the solutions, the practical ways that companies, or individuals, or even more broadly the industry, can help to authenticate what’s real around these things, or proof it? Maybe go into a little detail on that.
Dave: So, like I said, a company must basically [00:16:00] decide on a framework, whether they adopt my framework, which is the Commitment Preparedness Discipline framework, or any other framework. There are several great frameworks out there, but the important decision the company has to make is what framework works best for them, and then do certain things.
There are hundreds of pieces of guidance out there, and that is overwhelming. So I like to keep it simple. Yeah. When I say commitment, what does that mean? Senior leadership has to buy in that this is an issue, and they have to get involved. They have to promote oversight. They have to understand what the threats are, and they have to have a team in place that’ll keep them informed: where are their defense systems, what are the risks?
And what are they doing to reduce the risks? So there has to be some very informed decision-making going on. No company can ever secure itself and expect to be [00:17:00] immune. Right. That’s not possible. Right. But they get a lot of credit if there is a true effort to try and mitigate it, try and minimize it.
And commitment also has to come from the ground up. It’s not enough for the leadership team to engage in cybersecurity governance and AI-leveraging activities; they must also motivate the organization. And that motivation happens through cross-functional involvement. Get the unit heads, get the domain experts to participate in these teams and committees.
Get them to contribute how they feel their domains can be secured. Whether it’s an AI threat or any other threat, the principles are still the same. Cross-functional collaboration. Establish very rigorous processes, such as the continuous monitoring I talked about, backed by prompt logging and prompt addressing of the alerts. [00:18:00] Those processes should be so etched into the organizational fabric that even if there is turnover, the company is still operating at a certain level of mature security governance. And finally, from a preparedness standpoint, as I said earlier, being proactive. And there are lots of tools out there,
Yeah, which allow companies to capture intelligence from systems, from models, paint a holistic story of what the threats are and how to counter them, and then those tools themselves will counter those attacks, counter those threats, and provide regular feedback. So companies need to invest in those technologies, but they have to be backed by human intervention.
I was engaged in a lab, an interactive session, where they gave us a scenario and talked about how you mitigate [00:19:00] AI-related risk alerts, how you detect that they are real alerts and not fake alerts. Yeah. And once again, you need the help of technology. But if those alerts relate to critical areas of the company, then you just can’t rely on the technology deciding whether it’s real or fake.
Right. It requires another level of oversight. Yeah. And that’s where the human intervention comes in. So yes, you counter AI with AI, but you need the human involvement, right, where you have to verify. So, you know, we are familiar with that phrase: trust but verify. Yeah.
Luke: Yeah.
Dave: Same principle here, though.
There are people out there who blindly believe in the zero trust framework. Right. Trust nothing, verify before you even take the next step. And I can’t fault them for that. Yeah. I think that’s a fair approach.
Luke: Yeah, that’s great. You often emphasize the convergence of [00:20:00] cybersecurity and business continuity.
I think you just touched on that a little bit. Yeah. How should companies build AI resilience?
Dave: So, to be very specific, to give you some specific pointers here: one is threat modeling using AI systems. The other would be building fail-safes and contingency modes, because every company has to be prepared that there’s gonna be a breach.
Now what do we do? What’s our backup operation? Where do we come back up and running from as soon as possible? Yeah. A company has to be prepared for that, whether that’s having a hot site or a cold site, whatever that might be. Those actions must be part of a playbook that has been practiced and rehearsed.
The third, I would say, is validating the data pipelines. You know, it’s funny: for the last 30, 40, 50, even more years, we’ve been familiar with that phrase, garbage in, garbage out. Yes. It couldn’t be more relevant today, [00:21:00] when we have these AI models that are being trained and tested. If the data that is being used is not high quality, if the data has been polluted, has been poisoned, then you’re gonna get bad results. So that’s all the more reason why there has to be constant verification and validation before the data enters the processing area. The fourth, I would say, is training the workforce.
And that’s a challenge, because workforce turnover is high. Yeah. But you have to find ways to encourage them to learn as much as they can, even if it’s not in their domain of interest or expertise. Right. They have to develop that security mindset I talk about. Yeah. So when any decision is taken, or when any action is being taken, they’re thinking: what’s the security implication?
Is this right for the company? If not, who should be informed, for guidance? So that’s how you start shaping, you [00:22:00] start evolving, cybersecurity readiness.
Luke: I think it’s fantastic too, because a lot of these things that an employee would be mindful of will be beneficial in everyday life too.
I don’t know, I get calls 50 times a day from robocallers and things, and I think people want to know; they just don’t know what to look for.
Dave: So, I’m gonna share with listeners some basic preparedness tips, from an individual standpoint.
Yeah. Anytime there is a call or a text that involves a financial transaction, you want to call back. Yeah. You want to visit the financial institution to verify that what is being requested of you is worth doing. Yeah. I’ll give you a more specific example. I received a call from the local police department.
The guy calls me and says that you have been a victim of an ID theft. And he sounded very genuine, just like any officer. Sure. [00:23:00] I immediately stopped him and I said, officer, I appreciate that. Can I have a callback number? Can I have your badge number? And he was very nonchalant about it.
It didn’t fluster him. He said, yeah, sure. So he gave me the number he was calling from; I don’t think he gave me a badge number. Then I called the police department in my area, and they confirmed that the number he was calling from was their number. But the name that he gave me, and he did give me a name, my wife said that name is a character from a movie.
I said, well, you know, people can have similar names. Sure. So I can’t question that. But then, when I checked with the police department, they said, we don’t have anyone by that name. So this guy came very close to coming across as genuine. He sounded stern and authoritative. Yeah. Somebody else might have fallen for it.
Oh yeah. But like you said, I do this so much, right, that I should be a little more prepared than others. But even I can be compromised, or I can fail. So, as I was saying, one is to constantly validate. Second is, before you put anything out there about yourself or your activities, think twice.
Because there is no such thing as privacy once the data has left your system and has entered another system, as much as the vendors might promise you. So people should be more conscious about what they’re sharing and how they’re sharing it. And finally, prioritize. Just like companies have to prioritize what’s most critical or important for them,
similarly, individuals need to prioritize. ‘Cause that’s when you can decide: okay, I’m leaving home for the holidays. If my house gets burned down, I have insurance, they’ll probably rebuild the house, but I may never be able to recover these artifacts that have tremendous sentimental value. So if that’s the case, what have you done about it? Right. Have you put them away in a safe deposit vault somewhere? [00:25:00] So that is the level of planning, backed by good execution, that will keep us reasonably safe. Because ultimately it’s about the data that is getting compromised, right? So if you can make sure that even if the data got compromised, your future, and by you I mean the individual or the organization, your future is not harmed,
you are generally okay. Yeah. So it requires an element of security paranoia. Yeah. It requires an element of skepticism. Don’t just believe something because you see it, right? You see an email promising you a certain amount of money, asking you to follow a link: just stop. Right. There’s no need to follow links.
Yeah. You don’t even need to open the email if you think that you don’t know who the sender is, or if the subject header sounds too good to be true. Right? These are basics. Yeah. But the question is, will people [00:26:00] follow that?
Luke: Right. Yeah, that’s great. And I think so many of these things are things people think about from the analog world in a practical way, like home security.
We’re getting into a different space now; we’ve touched on the present, but if you imagine a cybersecurity organization of 2030, what roles or capabilities exist that most companies don’t even have on their org charts?
Dave: It’s a great question. In fact, as I was thinking about this podcast, I was thinking about some future roles that I see.
I listed a few of them. One, of course, is kind of obvious: a chief AI security officer. Another could be an AI model assurance lead. Then there could be a synthetic identity analyst, who’s focusing on synthetic identity verification. Then there could be an AI behavior auditor. Yeah, a digital trust architect.
So the fundamental roles of an auditor, an assurance lead, an architect, a security officer, [00:27:00] those roles don’t go away. They have just evolved and taken on a next-level dimension, thanks to AI. You know, I’ll tell you something: we can get all fancy and sophisticated with jargon and new frameworks, but ultimately, how you protect something, the fundamentals are the same, right?
Whether it’s protecting it from security attacks or protecting it from physical attacks, the fundamentals are
Luke: the same thing. Yeah.
Dave: Yeah.
Luke: Preparedness, hygiene, consciousness. Lots of these things are very relatable.
Dave: And you know, that’s the best way of becoming cyber aware. Yeah. If you put out there a whole bunch of acronyms, fancy words, complex jargon, people tune you out.
Yes. But if you tell them, hey: commitment, preparedness, discipline, three things. If you focus on them and do them well, you’ll be pretty good. Oh yeah. I think that is more relatable.
Luke: Yeah. Totally. [00:28:00] Totally. So, you’ve interviewed a lot of experts on your Cybersecurity Readiness Podcast.
What’s one surprising insight from those conversations that has fundamentally changed your view?
Dave: One surprising insight was when a security officer told me: Dave, my experience working in different organizations has been that we are still very reactive in our approach. And I must tell you, I was more idealistic at that point.
I thought organizations really had their act together, because, at least in the US, we’ve evolved significantly in terms of providing guidance, frameworks, recommendations, critical success factors. I would have assumed that organizations were following it, and following it well. But it came through over and over again,
Yeah. From the episodes that I have done, and I’m also getting to my hundredth episode shortly.
Luke: Hey, congrats.
Dave: Thank you. Yeah: reactive. And the other thing that I mentioned to you earlier [00:29:00] is it’s more of a check-the-box kind of approach. It’s not, let’s go above and beyond.
And I think to some extent that has to do with, again, when I say we, I mean organizations, the leaders: they respond better when there is an official regulatory mandate. Yeah. With serious consequences. Yeah. Such as going to jail. And my thought is, I wish we didn’t need those, that we were more proactive and did the best we could, given our resources, so we could properly secure our ecosystem. Yeah. So those were some of the takeaways that keep coming through. Yeah. There are lots of great tools out there, lots of great recommendations, but the adoption, the implementation, is not as effective as it could be or should be.
Luke: Yeah. It seems so similar to what you see with health and wellness, where sometimes having a scare of your own will shock you into [00:30:00] taking it more seriously. But I’ve got a couple of rapid-fire questions I wanna hit you with. First off: what’s one piece of bad security advice you hear way too often?
Dave: That AI tools are secure by default. And nothing against vendors.
I’m at a vendor expo right now, right? Right. And they talk about having a browser, and they emphasize privacy. I want to respect that, and I respect the innovation. But as a consumer, I have to verify. Yeah. I just cannot blindly accept the tools for what they are proposed to be. Yep, yep. So that is one myth
Yep. That is out there: that just because they are promising these solutions, promising that these miracles will happen, doesn’t mean they will.
Luke: Right. No, that’s a great point. Especially since you mentioned privacy. In a space like the US, we’re moving out of the stage where people question whether anyone cares about privacy, and now you’re getting a competitive landscape around it. And a lot of the bigger companies are kind of defining the term, [00:31:00] in the absence of a regulator telling you what it is. So I think that’s a very good point you brought up there. What’s the most underestimated AI threat today?
Dave: You know, I think the poisoning of the models, the models that are used to process intelligence. Yeah. They are getting poisoned. Initially the thinking was that the data is getting poisoned, that that needs to be the focus area. Then you have API poisoning; that’s another area. But model poisoning,
and it’s evolving. Even as the model evolves, the perpetrators are able to adapt and even corrupt those models. So that’s why you need continuous monitoring, continuous validation, continuous verification. You cannot assume: yeah, my tool is working just fine, producing great results, I don’t have to worry about it. No, you have to constantly keep checking, yeah, at the model level, at the output level, at the data level, to make sure the model is still working the way it is supposed to.
Luke: Yeah. That’s great. And then what’s one emerging AI [00:32:00] capability that excites you?
Dave: Of course, you know, there’s a lot of talk about agentic AI.
Luke: Yeah.
Dave: Agentic AI is when AI is doing literally everything: not only reacting to threats, but fixing threats. So it’s kind of automated intelligence. Yeah. But automated intelligence that is also constantly learning.
Yeah. You know, we’ve always had intelligent agents. Agentic AI takes it to a different level. Mm-hmm. But once again, I’m still a fan of the human, though. I read the other day that supposedly in 20 years’ time, humans will not have to work anymore. And, you know, I don’t know if those theories are right. One can dream.
But I feel that, once again, I’m gonna emphasize the human involvement, the human intervention. Yeah. But that requires the humans to be trained. Yeah. To be empowered. Yeah. You know, we are at an AI conference today. It’s great to see people of all age groups, of all demographics. Seniors, young folks, they’re all [00:33:00] engaging, all participating.
That’s exactly the spirit, because this technology, more than anything else, is telling us that it’s never too late to learn, it’s never too early to learn, and there’s constantly something to learn. So learning should never stop. I used to say that as a professor.
But now it’s become easier, because I can point to AI and say: if AI is going to continuously learn, and you choose to say, “Okay, I have my college degree, my learning comes to an end,” what does that say about you? How soon do you think you will get booted out of the market by an AI tool?
So just to keep pace with AI, you have to learn, and learn constantly.
Luke: Yep. Yep. And what’s one book every leader should read about digital risk?
Dave: Well, I’m going to talk about my two books.
Luke: Yeah, please do.
Dave: I published the first one, Cybersecurity Readiness: A Holistic and High-Performance Approach. That is where I talk about the framework, the commitment-preparedness-discipline framework. It was also [00:34:00] written to raise awareness. I didn’t make it too technical, so people of reasonable awareness would be able to process it, and it was written keeping the practitioner in mind.
I use that framework to conduct discovery of an organization’s state of readiness, so I can then appropriately advise clients. That’s the first book. Recently, at the end of September, I published the second book. Totally different genre: it’s a novel, a tech thriller called The Deepfake Conspiracy.
I recognized that I’m unlikely to get the global attention of the masses by writing books meant for practitioners. I have to convey my concern about these threats, and how to deal with them, through a compelling story.
Luke: How was that, switching from nonfiction to fiction?
Dave: That’s a great question, and I get asked that a lot. [00:35:00] Both books had their unique challenges. The first book, which was more of a scholarly work, is extremely well referenced. You have to have a mastery of the literature to be able to write it, because you know it’s going to be validated, it’s going to go through a review process.
So that has its own challenges, unless you are very familiar with the literature and with what happens in the world of practice. I’ve always taken pride in trying to strike that balance, being a professor but also engaging with practice, and that’s what came out in that book. The second book is different: you are basically making up a story.
The challenge was: how do you talk about these threats through a story? That’s the fascinating part about this book. There is this celebrity, a person who inspires others through her Instagram and other social media [00:36:00] accounts. She becomes the victim of a deepfake attack.
It completely caught her unawares. She had this massive following, and before she knew it, she didn’t know what had hit her; she was totally unprepared. That’s when she goes to her friend, and the friend and another fellow get together and try to get to the bottom of the problem.
They reach out to the legal folks, they reach out to law enforcement, but they recognize that they have to do their part to bring the perpetrators to book. To make that happen in a novel, one had to craft a plot, come up with how the attack was conducted, and also describe how the attack was thwarted in a non-technical and interesting way.
Right, right. That was the challenge. Yeah,
Luke: yeah. Yeah.
Dave: It took creativity and imagination. [00:37:00] The book went through several iterations, where I worked with a team. Of course, you don’t produce a good book just by yourself.
Luke: Right, right.
Dave: So I had a team that worked with me. And we went back and forth.
We played to our strengths, and the good news is it’s being received very well.
Luke: Did you enjoy it?
Dave: Book writing is always very painful, very stressful. But I feel happy now when I hear good things about the book. I think that’s true for any creative work. I will admit that doing it is fun, is exciting, but it’s also very stressful, and you don’t know what to expect.
Luke: Right.
Dave: Because that was a job I was not comfortable with.
Luke: Right. It’s out of your wheelhouse.
Dave: Totally out of my wheelhouse, yeah. But something about me is that I like to get beyond my comfort zone
Luke: Yeah.
Dave: And kind of reinvent myself, which is what I’ve done throughout my life and my career.
This is almost my third career. I started off as a chartered accountant in India. [00:38:00] Then I got into the world of management information systems, and now I’m focused on cybersecurity and AI governance. I’ve felt the need to evolve and adapt, because that’s where I felt the greater need was.
In my world, my work is not satisfying if it doesn’t have a positive impact on the community. And I keep it simple: whatever little I can do to make the world a safer place will make me happy.
Luke: Well, I think a lot of what we’ve talked about here has been around how you get people to change habits. And a lot of this is also how you reach people, right? Reach people that might not ever think about security. I think doing it through fiction is fantastic.
Dave: Well, let’s hope that it catches fire and great things come out of it.
Luke: Thank you so much for your time today. Where can people find you if they want to reach out? I also want you to [00:39:00] plug your podcast too.
Dave: Sure. Well, thank you so much. I host the Cybersecurity Readiness podcast series, and the best way of finding me is to Google me. I also have a website: dchatte.com, that’s D-C-H-A-T-T-E dot com. If they go to the website, they’ll find links to all my social media platforms. As far as the books go, people just have to Google the titles and they’ll come up on Amazon. So it shouldn’t be very hard to reach me.
I’m always open to anyone who wants to reach out and talk to me about threats and how to protect themselves from them, because that’s part of my offering, part of my service to the community, and I never shy away from that. We are all doing this work, and yes, the compensation is important, the profits are important, but at a much bigger level it’s all about making all of [00:40:00] us more safe, more secure. So whatever I can do in my own way to help realize that goal, that mission, would make me very happy.
Luke: That’s fantastic. I can’t think of a better note to end on. Dave, thank you so much for making the time today. Really appreciate it. It was a great conversation, and I’d love to have you back to talk about the next thing you do.
Dave: Thank you so much.
Luke: Right on.

