
Generative AI Risks: Vulnerabilities in AI Models, Data Privacy, and Corporate IP Exposure



Can machines think? When the great British mathematician and computing pioneer Alan Turing first asked this question in his now-famous 1950 paper, the idea that inanimate computers could think like humans met only skepticism. But today, more than seven decades later, numerous developments and innovations in the field of artificial intelligence (AI) have raised hopes that thinking machines may, in fact, soon become a reality.

The AI innovation that has generated the most excitement is generative AI (gen AI). I want to clarify that today's gen AI tools are not fully thinking machines. However, they make computers intelligent enough to understand user inputs and respond with helpful output in natural, human-like language. This ability facilitates innovation in numerous domains, encouraging many modern businesses to adopt the technology enthusiastically.

That said, this enthusiasm comes at a price. The use of gen AI is fraught with risks, and an organization that wants to make the most of its transformative potential needs to be aware of them. It also needs effective strategies to mitigate those risks and minimize their potential impact.

AI models are the foundation of generative AI programs. These mathematical frameworks learn from vast quantities of "training" data to discern patterns within the data and to perform complex tasks that typically require human intelligence.

Unfortunately, many AI models contain security vulnerabilities that affect their performance and reliability. Adversaries exploit these vulnerabilities to turn the models to their own ends, such as taking over enterprise systems, stealing sensitive data, executing malicious code, and even engaging in industrial espionage.

AI security research firms regularly publish lists of these vulnerabilities to increase awareness among organizations. For instance, Protect AI's November 2023 vulnerability report lists local file inclusion, remote code execution, MLflow arbitrary file write, and Ray remote code execution as the most impactful vulnerabilities across the AI/ML supply chain.

However, these weaknesses are just the tip of a vast iceberg. Much deeper research is needed to build businesses' and individuals' awareness of AI risk.

Data security and privacy rank among the top concerns for a significant share (31%) of business owners using AI. Furthermore, the share of organizations that consider "personal/individual privacy" a risk of using gen AI has increased from 39% in 2023 to 43% in 2024.

These concerns are valid, as many AI models lack sufficient privacy measures. These gaps allow attackers to execute highly damaging extraction attacks, in which they copy the model to steal the valuable information it encodes, or inference attacks, in which they analyze the model's predictions to infer sensitive attributes of the training data. Either way, they can compromise the confidentiality of the training data and access sensitive information that individuals want to keep private.
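
To make the inference side of this concrete, here is a minimal Python sketch of the membership-inference idea. The model object, its predict_proba interface, and the 0.95 confidence threshold are illustrative assumptions for this sketch, not the workings of any specific real-world attack.

```python
import numpy as np

# Minimal membership-inference sketch (illustrative only).
# Assumption: "model" is any trained classifier exposing predict_proba(),
# such as a scikit-learn estimator; the 0.95 threshold is an arbitrary example.
def likely_in_training_set(model, record, threshold=0.95):
    """Guess whether a single record was part of the model's training data.

    Overfit models tend to be far more confident on examples they were
    trained on, so an unusually high top-class probability hints that the
    record (and its sensitive attributes) appeared in the training set.
    """
    probabilities = model.predict_proba(record.reshape(1, -1))[0]
    return float(np.max(probabilities)) >= threshold
```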

The exposure or loss of intellectual property (IP) may adversely affect an organization in many ways: loss of potential revenues, hampered innovation, a weakened competitive advantage, increased legal costs, and reputational or brand damage. Generative AI can contribute to all of these problems.

Generative AI models are trained on large volumes of data. If this data includes copyrighted materials or confidential business information, or if there is no clear trail of the data source or collection process, the organizations using these models may face severe IP exposure or infringement claims.

Unfortunately, insurance may not protect organizations from these risks. One recent report revealed that even though IP is increasingly vulnerable to infringement and security breaches, it rarely gets the same level of insurance protection as tangible physical assets, leaving businesses at very high risk of IP exposure.

The risks I have highlighted above are no longer theoretical or limited to "someday but not today." Clever attackers have already developed many types of offensive campaigns targeting AI models, systems, and data. For example, they may execute data poisoning attacks, manipulating an AI model's training data so that the model produces results favorable to the attacker while causing widespread chaos or damage for genuine users.
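
As a toy illustration of how simple poisoning can be in principle, the following Python sketch flips a small fraction of labels in a hypothetical training set. The array, the binary labels, and the five percent flip rate are assumptions made purely for the example.

```python
import numpy as np

# Toy label-flipping illustration (not a real attack tool).
# Assumption: y_train is a NumPy array of 0/1 labels, e.g. for a spam filter;
# flipping even a small fraction can bias the trained model in the attacker's
# favour while degrading results for genuine users.
def poison_labels(y_train, fraction=0.05, seed=0):
    rng = np.random.default_rng(seed)
    poisoned = y_train.copy()
    n_flip = int(len(poisoned) * fraction)
    flip_idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[flip_idx] = 1 - poisoned[flip_idx]  # flip 0 <-> 1 on chosen rows
    return poisoned
```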

Smart adversaries have also learned to embed malicious code into pre-trained ML models to launch ransomware or phishing attacks or to move laterally across corporate networks. A recent example: in February 2024, researchers discovered that attackers could use malicious ML models uploaded to the Hugging Face AI platform to inject harmful code onto user machines, despite Hugging Face's built-in security protections.
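
One practical habit that blunts this class of attack is preferring weight formats that cannot execute code when loaded. The Python sketch below shows the idea; the file paths are placeholders, and the safetensors library and PyTorch's weights_only option should be checked against the versions you actually run.

```python
import torch
from safetensors.torch import load_file

# Preferred: safetensors stores raw tensors only, so loading cannot run code.
# The paths below are placeholders for a locally downloaded model.
weights = load_file("downloaded_model/model.safetensors")

# If a pickle-based .pt/.bin file is unavoidable, restrict what unpickling
# may construct (supported in recent PyTorch versions).
state_dict = torch.load("downloaded_model/pytorch_model.bin", weights_only=True)
```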

They can also use unsecured API access tokens to access widely used large language model (LLM) repositories, poison training data, steal models, and execute malicious cyberattacks. To make matters worse, these threats may evade detection by conventional cybersecurity solutions, increasing the probability of an attack with severe implications for the business, including IP theft, AI supply chain compromise, financial losses, and reputational damage.
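
Basic token hygiene is a cheap first defense here. The sketch below simply reads an access token from the environment and passes it only in an Authorization header; the MODEL_HUB_TOKEN variable name and the URL are placeholders, not any specific provider's API.

```python
import os

import requests

# Keep the token out of source code and version control; pass it only over
# an Authorization header. The environment variable and URL are placeholders.
token = os.environ.get("MODEL_HUB_TOKEN")
if token is None:
    raise RuntimeError("Set MODEL_HUB_TOKEN in the environment; never hard-code it.")

response = requests.get(
    "https://example-model-hub.invalid/api/models/my-org/my-model",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
response.raise_for_status()
```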

Given these dangers, I suggest businesses and individuals take a few concrete steps. Step one is to understand the generative AI risks facing the organization. Next, categorize those risks by severity and implement the measures best suited to mitigating each one. I recommend employing anomaly detection to identify suspicious patterns in data, scanning open-source models for malicious code before use, and using cryptographic signatures to prevent model tampering.
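
On the tampering point, one simple starting measure is to verify a downloaded model artifact against a digest published by a trusted source before it is ever loaded. Here is a minimal sketch; the file name and expected digest are placeholders.

```python
import hashlib

# Integrity check that runs before a model file is ever loaded.
# The expected digest would come from a trusted, ideally signed, release
# manifest; the value below is a placeholder.
EXPECTED_SHA256 = "replace-with-published-digest"

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("model.safetensors") != EXPECTED_SHA256:
    raise RuntimeError("Model file does not match the published digest; do not load it.")
```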

Adopting a "shift left" testing approach, that is, conducting risk reviews and testing early in the gen AI development lifecycle, can mitigate security risks. In addition, using secure model training environments and pre-verifying the integrity of training data sources can improve the relevance and accuracy of model outputs.
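
A small example of what shifting left can look like in practice is a validation step that runs before any training job and rejects obviously suspect data. The column name and allowed label set below are assumptions for illustration.

```python
import pandas as pd

# Illustrative pre-training data check; the "label" column and the allowed
# label values are assumptions for this example.
def validate_training_data(csv_path, allowed_labels=frozenset({"approved", "denied"})):
    df = pd.read_csv(csv_path)
    problems = []
    if df.isnull().any().any():
        problems.append("missing values present")
    if df.duplicated().any():
        problems.append("duplicate rows present")
    if not set(df["label"].unique()) <= allowed_labels:
        problems.append("unexpected label values (possible poisoning or drift)")
    if problems:
        raise ValueError("Training data failed validation: " + "; ".join(problems))
    return df
```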

I also recommend two other helpful strategies: develop guidelines for responsible AI use and institute a steering council to make decisions about responsible AI governance. An organization can also ensure responsible, ethical, and secure AI use by implementing robust data security measures, continuously monitoring AI models, enacting rigorous checks to filter input data, and training employees to understand the risks of sharing sensitive data with public AI models.
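
As one example of filtering input data, the sketch below redacts obvious sensitive strings before a prompt ever leaves the organization. The two regex patterns are simple illustrations, not a complete data-loss-prevention solution.

```python
import re

# Redact obvious sensitive data (email addresses and US SSN-style numbers)
# before a prompt is sent to a public gen AI service. Illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789, about the claim."))
```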

Undoubtedly, generative AI is one of the most revolutionary technologies of the modern digital era. However, the increasing adoption of gen AI tools exposes organizations to numerous security risks.

The good news is that these risks can be mitigated. The key is implementing proactive and robust governance, risk management, and responsible AI practices. By doing so, an organization can avoid the common security risks affecting gen AI tools and reap the full benefits of this incredible technology.

Vihar Garlapati is the director of technology at Optum, a leading healthcare services company. With over 17 years of experience in the IT industry, he has a proven track record of deploying sophisticated security solutions across the healthcare and financial sectors.

Garlapati's expertise includes identity access management (IAM) and the implementation of on-premises, cloud, and software-as-a-service (SaaS)-based services, which significantly enhance enterprise risk management, compliance, and productivity. His skills in integrating systems and automating processes have driven organizational growth by over 100%. He has also been instrumental in helping companies achieve critical certifications such as SOC 2 and HITRUST, which are essential for healthcare organizations that aim to attract and retain customers.

