Cyber threats are so pervasive that artificial intelligence solutions aren’t going to put chief information security officers out of a job, a financial conference in Toronto was told this week.
Being an infosec leader means “job security forever,” Jean-François Legault, global head of cybersecurity operations at corporate investment bank J.P. Morgan Chase, told the SWIFT network’s SIBOS conference.
“Governance remains key. Governance is essential. You can accelerate decision making [with AI], but having a human that can interpret information and take action is essential.”
A person has to look at AI models and determine how to train and protect them, he added. “There is job protection for anyone in cyber for a very long time.”
The topic of the panel was “AI on the front line of cybersecurity.”
Other panelists were Martin Kyle, CISO at Payments Canada; Kris Lovejoy, leader of the global security and resilience practice at Kyndryl Inc.; and Reggie Townsend, VP of the data ethics practice at SAS.
The role of the CISO will “fundamentally expand” because of AI and new technologies, Lovejoy predicted, because they will need governance and management to deploy responsibly.
Managing an organization is “a team sport,” she said, with no single role in a position to decide what technology should be used, and where and how to use it. Organizations that handle new technology effectively recognize that, she said.
Kyle suggested organizations have to look at the two sides of AI. Yes, bad actors can leverage AI to compromise a firm by, for example, using it to write better phishing emails or create deepfake audio and video. The flip side, he said, is asking, ‘How can my organization embrace AI to counter these threats?’ If we’re faster than adversaries in adopting AI, we will be able to defend better, he argued.
“So our job is to look at AI as an opportunity in the cyber and the business space. It’s an opportunity to increase our productivity, to increase the value of our products and services.”
Worried that AI solutions used by employees will leak corporate data publicly? It’s easy, Kyle argued, to adopt policies that limit use to approved AI solutions that check for data leakage and don’t mix corporate and external data sets. Worried about AI ‘hallucinations’? Engineer or choose AI systems so that their outputs and recommendations must be validated by humans.
AI will mean a change in the security awareness training of employees, warned Legault. Crooks can use AI, for example, to write grammatically perfect phishing emails, so staff can no longer be trained to just watch for typos. Instead, he said, awareness training needs to focus on what phishing messages want people to do (log into a website, transfer money from one account to another, and so on) and have employees ask themselves, ‘Does this sound normal in the context of my business practices?’
In the age of AI, organizations have to continuously adapt their business processes to keep up with threat actors, he said — and that’s no longer just the responsibility of cyber teams.
Organizations will also have to train staff about the ethical use of new technologies as part of corporate culture change, said Townsend. That has to impact not only business practices, but also the products and services the organization creates.
For example, SAS has what it calls a “soft governance model” that includes a cross-functional group of executives who focus on the ethical dilemmas AI poses, making sure products have controls for AI compliance, and “ethics circles,” small discussion groups on AI and its challenges.