In an era where cyber threats are escalating in both frequency and sophistication, the disparity between organizations that are resilient to these threats and those that are not is widening at an alarming rate. This panel delves into the factors contributing to this growing divide, such as budgetary constraints, skill shortages, and the rapid evolution of AI-driven cyber threats.
Ready for our first panel presentation, and I'll ask that the panelists for that presentation please take their seats at this time. So this panel, as everyone's getting settled up here, is titled The Growing Divide: Can AI Bridge the Gap Between Cyber-Resilient and Non-Resilient Organizations?, and it aims to help answer its title question and so much more. So joining us here on the stage are three very knowledgeable panelists who all hold positions that lend themselves very nicely to the topic. We've got Rocco from BDO, who's the Partner & National Cybersecurity Leader, while Giorgio is the Managing Director of Data and Technology for University Pension Plan. In addition, we also welcome Yogesh, who is an IDC analyst focused on security and infrastructure. The conversation today will be facilitated by our moderator, contributing editor for CIO and IDC, Mr. Shane Schick. Shane, the stage is yours. Good
morning, everybody. I don't know if anybody else saw it, but over the last day or so, there was this crazy art exhibit happening just a little ways away from here at the Bentway: life-sized dominoes that got toppled down, and you saw that wonderfully satisfying cascading effect from here almost down to Billy Bishop Airport. And if you're on social media, you'll see all these videos and photos of people, giant crowds, standing around, cheering as all these dominoes fell down in perfect order, in sequence. It's the exact opposite of what happens when IT security falls down in organizations. Of course, nobody is cheering, nobody's clapping, but like dominoes, there is a domino effect. When there's a vulnerability in an organization, it can lead to a data breach, which can lead to an escalation of privilege, and all kinds of terrible things. We know this is worse now than before, in part because of the proliferation of mobile devices, the decentralization of work and, of course, more recently, the advent of artificial intelligence, which certainly we can use to better guard against threat actors, but which can also be used by cyber criminals to make their attacks even more sophisticated. So I'm really pleased to be able to have this discussion today with these gentlemen, a bit about what it takes to build cyber resiliency so you don't end up with that domino effect in your organization. And I'm going to start off by talking to you, Rocco, if I can. So the CIO report shows that 79% of IT leaders believe that there should be an educational partnership between the CIO or the CISO and the executive board of directors. Do you have any insight, I guess, on how that should work, and are there ways that they can manage that relationship to help keep those people ahead of all the myriad cyber risks that organizations are facing? So
It's a great question. First, I do want to pause. I look around the room here and I think about these IT folks, IT leaders, who have one of the toughest jobs today, because there are two elements. One, the business wants you to move faster, and the technology actually allows you to do it today. So now it becomes a question of, how are we going to manage this effectively? So when we pivot into trying to find ways to build resilient infrastructures and organizations, everybody's on the leading edge today. It isn't like somebody tried it two years ago, and now we're just adopting it. This is real-time adoption. So networking events like this give you an opportunity to actually talk to your peers, determine what they are doing, and learn from their experiences, because they are new experiences. The other piece is, there's a roundtable this afternoon where you get a chance to actually surface what others are doing and understand that more. So there's the business aspect, the demand; there's the technology aspect, which allows you to do it faster. The next piece is being able to have a business-level conversation with your peers, your upline, your board of directors. Don't focus only on the IT enablement part, but actually focus on what's the business strategy. How do we align to it? As our business changes, as our services change, how does our profile from a risk standpoint change? Keep it at that level. Try not to talk about the bits and bytes and how the systems are stitched together. Just talk to them about how they enable the business. That
makes sense. Yogesh, in my opening remarks, I was talking about how AI can be used, certainly by cyber criminals, but could also be used by organizations to better protect themselves. But there seems to be a gap between some of the pilots that we've been hearing about this morning and actually, you know, executing and delivering on that vision. Any advice, I guess, on how organizations can start to close that gap and start to realize the value of technologies like generative AI for increased cyber resiliency?
That's a great question. And you know, based on what the title of the panel discussion is and where you're hinting, cybersecurity is a big concern. But before I talk about that, I just want to highlight, because Rick and James have done a great job showing us what the potential of AI is, I'm going to talk a bit about what the current scenario is. So earlier this year, IDC went to the market and we researched some topics around Gen AI adoption, POCs and production, and what challenges organizations are facing. When we look at the adoption rate, as per the study, in 2023, on average, organizations worldwide have done around 34 POCs for solutions that include Gen AI features. Now, when you look at how many of those actually went into production? Well, the average number worldwide is five. So that's roughly a 15% conversion rate, significantly lower than some other technology areas. So we further probed into what the leading challenges are. Why do some POCs get dropped midway? Why do some POCs, even after being successful in the POC stage, not make it to the production stage? In a way, all those challenges sort of converge to data issues: data privacy, accuracy of data, availability of data. These are the primary concerns. And this is interesting, because fundamentally, the power of AI at an enterprise level actually lies at the intersection of the organization's proprietary information and the Gen AI models, engines and infrastructure. When you look at this fundamental intersection, the IT market eventually is going to provide you with the models, the infrastructure and the engines, all of that technology, with products and services. The market is going to provide that. So what differentiates one organization that has tasted success with AI from others that haven't is the proprietary information.
So I think getting AI-ready data is a big part of it, and the other part, of course, is the security. If I have to explore this area more, you've talked about AI management and governance and security. The other area is also access; it's a corollary area. When you talk about data security, you first secure the access. So look at these three important vectors. First is identity. If you're talking about the AI models and AI data, the identity of a user or machine, the network identity that is trying to access those models, has to be discoverable. So the idea here is unified identity and identity management. The second is permission. The permissions for a user or a user group that is trying to access a model, or data, or an output of a Gen AI model, need to be discoverable. We're talking about role-based access; we're talking about least-privilege access. And the third is sensitivity. For example, if a model's output contains PII, that sensitivity needs to be discoverable by the system. And does this all sound like zero trust? Well, actually it does. So I think securing Gen AI, to an extent, is not something new. Organizations have been striving towards it. Organizations will be Gen AI ready, and they will eventually have more success with Gen AI, if their Gen AI systems are built with zero trust and they have efficient data management and security systems. And to be honest, in a scenario where there are no standards or regulatory frameworks, it could be challenging, but trust me, for governments around the world, it's a top priority. The EU AI Act is here, and the Canadian attempt is AIDA, the Artificial Intelligence and Data Act; it's embedded within the Digital Charter Implementation Act. So the Canadian government is working towards that as well.
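The three vectors described above (identity, permission, sensitivity) map onto checks that can be sketched in a few lines. The sketch below is a minimal illustration only, not from any real product or the panelists' own systems; `ROLE_PERMISSIONS`, `authorize`, `release_output`, and the role and clearance labels are all hypothetical names chosen for the example.

```python
# Illustrative sketch of the three zero-trust checks discussed on the
# panel: identity, permission, and output sensitivity. All names,
# roles, and labels here are hypothetical.
from dataclasses import dataclass

# Role-based, least-privilege permissions for Gen AI actions.
ROLE_PERMISSIONS = {
    "analyst": {"query_model"},
    "admin": {"query_model", "tune_model"},
}

@dataclass
class Request:
    user_id: str   # user or machine identity
    role: str
    action: str

def authorize(request: Request, known_identities: set) -> tuple:
    # 1. Identity: the user or machine identity must be discoverable.
    if request.user_id not in known_identities:
        return (False, "unknown identity")
    # 2. Permission: the role must grant this action (least privilege).
    if request.action not in ROLE_PERMISSIONS.get(request.role, set()):
        return (False, "action not permitted for role")
    return (True, "allowed")

def release_output(output_text: str, contains_pii: bool, user_clearance: str) -> str:
    # 3. Sensitivity: if the model's output contains PII, that label
    # must be discoverable and enforced before the output is released.
    if contains_pii and user_clearance != "pii":
        return "[REDACTED: PII output requires clearance]"
    return output_text
```

A known identity with a permitted role passes the first two gates; an unknown identity, an out-of-role action, or a PII-labeled output without clearance is stopped at the corresponding check.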
But while you focus on access and data security, we should not forget about monitoring and threat detection and response for these Gen AI systems. The reason I'm calling this out is because many times security is involved as an afterthought. The POCs are done, we're going to production, and then you bring in the security guy and say, hey, secure that. So if you understand, there are many use cases: efficiency-based use cases, internal-facing use cases. But where the real money is, where your value is, is when you start using Gen AI on your sensitive data, on the data that matters. Because data is involved, the full spectrum of data security threats is applicable: breaches, ransomware, insider threats. Everything is applicable. So how do you secure that? How do you monitor for adverse threats, and how do you respond to those threats? I think these are some important considerations which should be addressed by organizations before they venture into a Gen AI POC, because I see that there's a tremendous rush within organizations to do Gen AI POCs; they're actually hunting for problems to solve with Gen AI. I think we need to take a step back and look at these angles before we take the dip.
Point well taken. Giorgio, we know, and we've heard this, you know, I think for almost decades now, that one of the keys to cyber resiliency is education and making sure that staff are trained on how to recognize threats, how to deal with them, and how to contend with these changes in technology processes. I wanted to ask you about kind of resource gaps. I've heard from other CIOs that there are opportunities to use Gen AI for upskilling and training purposes and things like that. What's been your experience, or what's your perspective, I guess, on the opportunity to use this technology to build that cyber resilience at the employee level. Yeah, thank
you. And we heard it earlier in the conversation about the importance of the staff, right? Data is the second most important part of an organization, so our teams, our staff, are very important. And so when you think about training, Gen AI tools give you that instant access to knowledge, right? When I was reflecting on my career, you look at the old books like Teach Me Visual C# in 21 days, or something like that. Now that's a prompt, right? And then the context of those prompts can be included as part of your own training. So from a training perspective, it's super important. Then when you take it outside, from an organizational resilience perspective, you can use Gen AI tools to help you train the organization as well, right? And so cyber awareness training, cyber awareness testing, very important in terms of building resilience in an organization. And with training, Gen AI can help you bring more context to your training; your business content, what matters to that business, is super important, and you can start to work through that. So there's that aspect of training. Now, when you think about the resource gaps that we do have, a lot of the use cases that we heard earlier talk about how Gen AI, or AI tools in general, can augment a team, right? Can give you instant access to information, your own information. In cyber situations, it can look for anomalies, right, do threat hunting as well. So there are a number of use cases, even in cyber resilience, where you can use Gen AI tools to help you. Now finally, when you look at that, you know there is a necessary human-in-the-loop aspect here.
And so you do require domain knowledge; it's not like you pick up a Gen AI tool and you can become an expert in cybersecurity or in resilience. So having that human in the loop, having the domain knowledge, having these tools help you accelerate the ideas, get through some of the early grunt work, per se, and start to get to the value and the outcomes that you're looking for in a way that becomes much more resourceful, given limited resources in general. Absolutely.
So I think this is another area where IT leaders are trying to strike a delicate balance, right, between moving business forward, like we saw some presentations about how Gen AI could be really transformative for the business, but at the same time trying to stay ahead of business continuity planning and the security regulations that we know are emerging and will continue to emerge. Any thoughts from any of you, I guess, on how to help manage that delicate balance? Maybe I'll come back to you, because I'm sure this is a conversation you have with some of the people you work
with. Yeah, I just reflect on Yogesh and Giorgio and some of the comments that they made. When I think about this challenge, for years we've been saying here are the baseline security controls that every organization needs. And enter AI, and all of a sudden it seems like it's something new, that here are the baseline controls, but the baseline controls are actually the same. The challenge that we're running into is organizations who haven't been able to get those right, to allow AI to accelerate in a secure way. Regulation is another piece. There are certain regulations that we know have been established, and we can anticipate where they are and where they're going. So for us, it's really a matter of how do we build securely, and making sure we're building securely along the way. That's a big one for me. And the comment about teaching faster: Gen AI, or AI in general, is really there to sort of augment, not to replace, because the institutional knowledge can't be replaced. The context can't be replaced, right? It's about humans and how humans can leverage technology now to do things better and faster and more securely. Yogesh,
I want to come back to something you said, which was kind of that urgency, that people are kind of looking for problems. But if you're the IT leader, how do you find a diplomatic way to say we may need to take a step back before we move forward with this technology? Well,
in today's business world, if you ask a technology leader to take a step back, they don't take it in a good light, because nobody wants to be held back; your competition is working on that. But when I speak to IT and security leaders, I always try to bring them back to the basics. What I tell them is that the AI technology is not new, but what it can do is relatively new, right? The regulations are coming up, and the EU act is here. Canada's act will come in some time, but you have to understand, it won't be the final draft, right? It will keep changing. Now, IT leaders and security leaders have this habit of creating very fixed strategies: this is the way we're going to do things. I just tell them to control the controllables. What you can control is how AI will be used within your organization. You can identify the use cases. You can identify the business domains that will be leveraging these. You can identify who's going to have access to these AI models. These are the control levers that you can work on. You have to know that your strategy, once you create it, its shelf life is probably just going to be six or seven months. It will continuously keep updating as regulations update, as technology advancements come in. So I would say, keep it fluid, and never lose sight of the controllables that you can control, because if you try to control everything, if you try to make a strategy based on what the regulations are today, it won't be applicable just after six or seven months. So focus on the controllables. And I think you have to take it as, you know, an incremental value addition to your AI alignment to the business.
Giorgio, you're, in a way, on the front lines of this as an IT leader. How are you kind of managing that kind of balance between risk and reward from a Gen AI perspective?
I think, to add to these points, working closely with the business partners is super important, right? AI is not just a technology capability; it's a capability that technologists help enable, right? So staying very close to the business partners, understanding what their strategies are, and we heard a lot about the importance of connecting that to the business strategy. So having good controls is very important. Working through the controllables, being in the flow of work with the business partners, to understand and work with them, is going to be really important, right? Advocating for what those risks are going to be, making sure that they're understood. Those are going to be very helpful.
Absolutely. Rocco, just one other question for you. I feel like years ago, when I first started covering technology, senior executives might not have known a lot about cybersecurity. They might have been the ones to click on the phishing email and that sort of thing. My sense is, with all the stories in the headlines, that that's kind of changing. What's your sense of their awareness and knowledge of these kinds of threats, and to what extent, if at all, is that influencing their thinking about the use of AI in organizations?
I think, from my experience, we're becoming more aware of the types of threats that we experience every day, and technology executives know this. They're getting much better, and in fact, even folks outside of technology are getting better at uncovering these. What we're seeing, though, is a different level of threat sophistication, just in the way machines are used to perpetrate some of the attacks. Now you have to start thinking twice: wait, is that individual real? Is that video real, or is that fake? And how do we start to detect those? So the landscape continues to change, and I think the difficulty is, folks in the room, myself included, we're struggling to just keep pace. And again, this is where we have to learn from one another's experiences and work together to solve the problem. I often say, you know, adversaries collaborate, and we don't collaborate enough among this group. Absolutely.
I guess, going into 2025, this is an open question, but any last tips on how people can think about AI and cyber resiliency? Anything that they should walk away and do after they leave this event today? And I'll open that to anybody. I'll
start: continue to experiment. I think that we can never stop experimenting. And even with technology teams that are working through enabling AI, think about how you can use it in your own job. So continue to experiment. There's always an opportunity to leverage it, and I think that's going to be really helpful.
I would just add that, when you talk from a cybersecurity perspective about AI, we have to understand that there is adversarial AI. There are tons of use cases which are already in production within defensive AI. But AI is also the target, right? So more and more, it is going to become the CISO's and CIO's responsibility to educate the board about these risks. The more you acquire knowledge in this domain, and the way you want to get it is by meeting people from other organizations and seeing what they are doing, how they are working with their boards, I think that is key. Information sharing and knowledge sharing amongst technology and security leaders becomes the key thing when they go to the board and try to explain, you know, why are we doing this? Why do we need to spend, like, 15, 20, 30% of the overall AI project cost on cybersecurity? Why is that important? So I think the responsibility of driving that knowledge within the organization is going to fall on the CISO or the CIO; the board is not going to go and educate themselves. So I think CISOs need to prepare for that as well. Excellent.
Yeah, from my perspective, it would be setting the objective: what is the outcome that you want to achieve, the value that you're going to provide to the business? And that's not necessarily to say that each and every time you're going to be successful, but it's also setting the expectation that this is new for everybody. Here's the outcome we want to achieve. If we don't achieve it, we're going to fail fast, take another run at it, and take a different approach. So I would say, set kind of the overarching objectives and celebrate your victories. That would be my advice. Wonderful.
Well, I realize this was a fairly rapid-fire kind of session. We probably didn't completely solve cyber resiliency in the age of AI, but I hope that it gave you at least the beginnings of some ideas on how that conversation between IT and the senior leadership team should evolve. And so I'd like to thank Giorgio, Yogesh and Rocco for being a part of this session today. Thank
you. Thank you so much. Shane and panelists, fantastic job. Appreciate your time and all of your expertise. Thank you so much for joining us here.
Transcribed by https://otter.ai