AI Cyber Risk and Mitigation Guidance from the New York State Department of Financial Services

Banks, insurance companies, partnerships, agencies, associations, and other entities registered or licensed under the New York State banking, insurance, or financial services laws ("Covered Entities") are regulated by the New York State Department of Financial Services ("NYS DFS"). The NYS DFS views its role as establishing regulatory minimum standards and providing guidance and resources to assist Covered Entities in meeting them. In that capacity, the NYS DFS has been at the forefront of cybersecurity regulation, initially in 2017 with the Cybersecurity Requirements for Financial Services Companies[1] (the "Cybersecurity Regulation") and most recently in October 2024 with the Industry Letter regarding Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks (the "AI Cyber Risk Letter").[2]

The AI Cyber Risk Letter provides guidance on how Covered Entities can assess AI-related risks and mitigate them in accordance with the existing Cybersecurity Regulation. As the letter explains in closing, "[a]s AI continues to evolve, so too will AI-related cybersecurity risks. Detection of, and response to, AI threats will require equally sophisticated countermeasures, which is why it is vital for Covered Entities to review and reevaluate their cybersecurity programs and controls at regular intervals, as required by Part 500." (emphasis added).

As summarized below, the AI threat landscape and possible mitigation measures are the main topics of the AI Cyber Risk Letter. On a positive note, the letter also observes that AI capabilities, such as the ability to quickly perform routine tasks and analyze large volumes of data, may themselves help mitigate risk. More generally, for Covered Entities as well as entities not governed by NYS DFS regulations, the AI Cyber Risk Letter provides a useful framework for proactively assessing and managing AI risks.

AI-Enabled Social Engineering

Threat actors can leverage AI to create highly personalized and sophisticated content for social engineering, including realistic deepfakes that may convincingly mimic real individuals at little to no cost and without technical expertise. Such social engineering attempts are more likely to succeed and may result in the disclosure of sensitive information or in actions such as wiring money to the threat actor.

AI-Enhanced Cybersecurity Attacks

The power of AI to process and analyze information accelerates the efforts of threat actors to penetrate systems and exploit security vulnerabilities. AI can also help threat actors modify malware and ransomware to stay ahead of defensive security controls. Furthermore, according to the AI Cyber Risk Letter, AI lowers the barrier to entry for cybercrime, enabling even low-skilled threat actors to launch cyberattacks, and makes it possible to conduct more attacks, more quickly.

Exposure or Theft of Vast Amounts of Nonpublic Information ("NPI")

Covered Entities seeking to deploy AI may maintain, or allow access to, troves of data, creating a target-rich environment for threat actors. That data may include personal information, which makes the consequences of a cyberattack more severe and may implicate various data privacy laws. Some of it may also include biometric data, such as facial images or fingerprints, that could potentially be used to impersonate authorized users and gain access to systems.

Increased Vulnerabilities Due to Third-Party, Vendor, and Other Supply Chain Dependencies

AI often requires coordination with third-party service providers, each of which adds another link in the supply chain that threat actors could exploit, thereby increasing exposure to risk.

Mitigation Measures

To mitigate these risks, the AI Cyber Risk Letter recommends that Covered Entities consider and implement measures including risk assessments that account for AI-related threats; third-party service provider and vendor management; access controls, including multifactor authentication; cybersecurity training; monitoring of systems and activity; and data management practices that limit the NPI collected and maintained.
