
Cybersecurity Risks from In-house LLM AI Implementations

The primary business cybersecurity risk when implementing a large language model (LLM) artificial intelligence (AI) system (or any similar system, such as an expert system) is the potential exposure of the data and rule set to unintended audiences.


Beyond a straightforward ransomware impact, where the data set is copied or encrypted, exposure of the underlying proprietary algorithms and data creates problems of its own. A competitor gaining direct access, or simply probing the system to learn the underlying rules and source materials, can be detrimental.


Fortunately, the rules for securing these systems are the same as always, with one exception: the addition of security rules within the AI system itself to identify when someone is attempting to exploit the system and exfiltrate its content or structure.


Securing this is relatively straightforward and can be modeled using red team activities to identify what kinds of rules need to be embedded to alert security and/or actively respond to exploitation attempts, and where those rules need to sit. The same red team can then refine the process by trying to work around the restrictions.
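
To make the idea concrete, here is a minimal sketch of the kind of rule a red team exercise might produce: a simple regex screen in front of the model that flags prompts that look like attempts to extract the rule set or source material. The pattern list and the alert_security hook are illustrative assumptions, not a prescribed implementation.

# Hypothetical sketch: screen incoming prompts for attempts to extract the
# system's rules or source material, and flag them for the security team.
# Patterns and the alert hook are illustrative only.
import re
from dataclasses import dataclass
from typing import Optional

EXFILTRATION_PATTERNS = [
    r"(?i)\b(ignore|disregard)\b.*\b(previous|system)\b.*\b(instructions|rules)\b",
    r"(?i)\b(repeat|print|reveal|dump)\b.*\b(system prompt|rule set|source documents?)\b",
    r"(?i)\bverbatim\b.*\b(training|source) (data|material)\b",
]

@dataclass
class GuardResult:
    allowed: bool
    matched_rule: Optional[str] = None

def alert_security(rule: str, prompt: str) -> None:
    # In practice this would feed your SIEM or ticketing system;
    # here it simply logs the event.
    print(f"ALERT: exfiltration rule matched: {rule!r}")

def screen_prompt(prompt: str) -> GuardResult:
    """Return whether the prompt should reach the model, and which rule fired."""
    for pattern in EXFILTRATION_PATTERNS:
        if re.search(pattern, prompt):
            alert_security(pattern, prompt)
            return GuardResult(allowed=False, matched_rule=pattern)
    return GuardResult(allowed=True)

A production filter would be more nuanced than keyword matching, but the point stands: the red team tells you which patterns matter and where the screen should sit, then tries to get around it.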


Use of honeypots in these cases is highly recommended, especially if tracking mechanisms can be employed to identify exfiltration efforts.
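
As a rough illustration of how that tracking could work, the sketch below seeds decoy records with unique canary tokens and checks model output for them; the record structure and token format are assumptions for the example, not a standard.

# Hypothetical sketch: seed the corpus with decoy "canary" records carrying
# unique tokens, then watch model outputs (or egress logs) for those tokens.
# A hit is strong evidence that honeypot content is being pulled out.
import secrets

def make_canary_record(topic: str) -> dict:
    """Create a decoy document tagged with a unique, trackable token."""
    token = f"CANARY-{secrets.token_hex(8)}"
    return {
        "title": f"Internal notes on {topic}",
        "body": f"Reference code {token}. Do not distribute.",
        "canary_token": token,
    }

def check_output_for_canaries(model_output: str, known_tokens: set) -> set:
    """Return any canary tokens that leaked into a model response."""
    return {tok for tok in known_tokens if tok in model_output}

# Usage sketch: a response that quotes a decoy record triggers an alert.
decoys = [make_canary_record(t) for t in ("pricing model", "scoring rules")]
tokens = {d["canary_token"] for d in decoys}
leaked = check_output_for_canaries(decoys[0]["body"], tokens)
if leaked:
    print(f"Possible exfiltration detected: {leaked}")  # route to monitoring/alerting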


The rest is the same set of rules we always secure systems with. Limit access. Use layered access. Restrict broad roles. Monitor data movement. Monitor integrity. Monitor access channels. Ensure your DR/BCP components support the new requirements. Identify dependent business operations in the risk analysis. Make sure your purchasing practices are working correctly so you don't let shadow IT operations spring up.
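
For the access and monitoring pieces, a minimal sketch of what "limit access, layered access, restrict broad roles" can look like in front of an internal model endpoint is below. The role names, actions, and logging target are assumptions for illustration.

# Hypothetical sketch: narrowly scoped roles per action, with an audit log
# entry for every request against the model or its rule set.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm.audit")

ROLE_PERMISSIONS = {
    "analyst":  {"query"},
    "ml_admin": {"query", "update_rules"},
    "auditor":  {"read_logs"},
}

def authorize(user: str, role: str, action: str) -> bool:
    """Check a narrowly scoped role before any model or rule-set operation."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
    )
    return allowed

# Usage sketch: a broad "admin" role is deliberately absent, so this fails and is logged.
if not authorize("jdoe", "admin", "update_rules"):
    print("Denied and logged for review.")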


In addition to the cybersecurity concerns, there are ethical and legal risks, as well as operational and reputational risks.


AI is a marvelous tool, but it does require additional cybersecurity considerations to operate safely, and these need to be part of the risk analysis process, as well as any security compliance audits.
