The primary business cybersecurity risk when implementing a large language model (LLM) artificial intelligence (AI) system (or any similar system, such as an expert system) is the potential exposure of the data and rule set to unintended audiences.
Beyond a straightforward ransomware impact, where the data set is copied or encrypted, exposure of the underlying proprietary algorithms and data can create problems of its own. A competitor gaining access, or simply probing the system to learn about the underlying rules and source materials, can be detrimental.
Fortunately, the rules for securing these systems are the same as always, with one exception: the addition of security rules within the AI system itself to identify when someone is attempting to exploit the system and exfiltrate its content or structure.
Securing this is relatively straightforward and can be modeled using red team activities: the red team identifies what kinds of rules need to be embedded to alert security and/or actively respond to exploitation attempts, and where those rules need to sit. The same red team can then refine the process by trying to work around the restrictions.
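A minimal sketch of one such embedded rule, in Python. The patterns, function names, and the idea of simple regex screening are illustrative assumptions; a real rule set would be seeded and refined iteratively by the red team as it discovers workarounds.

```python
import re

# Illustrative exploitation patterns a red team might seed; a production
# rule set would be far richer and continually tuned.
EXFILTRATION_PATTERNS = [
    r"repeat (all of |your )?(the )?(system prompt|instructions)",
    r"ignore (all |your )?previous instructions",
    r"list (every|all) (rule|source|document)s? you (use|were given)",
]

def is_exploit_attempt(prompt: str) -> bool:
    """Flag prompts that match a known exploitation pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in EXFILTRATION_PATTERNS)
```

A gateway in front of the model could call `is_exploit_attempt` on every request, alerting security or declining the request on a match.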
Use of honeypots in these cases is highly recommended, especially if tracking mechanisms can be employed to identify exfiltration efforts.
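One common tracking mechanism is the canary token: fabricated records planted in the data set, each carrying a unique marker that should never appear in legitimate output. The sketch below assumes this approach; the record names and logging hook are illustrative.

```python
import logging
import uuid

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("honeypot")

# Plant decoy records carrying unique canary tokens in the data set.
# Any model response that echoes a token signals an exfiltration attempt
# and identifies which planted record was pulled.
CANARIES = {uuid.uuid4().hex: f"decoy-record-{i}" for i in range(3)}

def check_response(response: str) -> list[str]:
    """Return the decoy records whose canary tokens appear in a response."""
    leaked = [record for token, record in CANARIES.items() if token in response]
    for record in leaked:
        log.warning("Canary tripped: %s surfaced in model output", record)
    return leaked
```

Because each token is unique, a tripped canary tells you not just that exfiltration occurred but which decoy was reached, which helps reconstruct the attacker's path.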
The rest of it is the same set of rules we always use to secure systems. Limit access. Use layered access. Restrict broad roles. Monitor data movement, integrity, and access channels. Ensure your DR/BCP components support the new requirements. Identify dependent business operations in your risk analysis. Make sure your purchasing practices are working correctly so you don't allow shadow IT operations to spring up.
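Monitoring data movement can be as simple as tracking egress volume per user and flagging anomalies. The sketch below assumes a per-user daily byte threshold; the class name and the specific limit are illustrative and would be tuned against your observed baseline.

```python
from collections import defaultdict

# Illustrative threshold; tune against your normal per-user baseline.
DAILY_LIMIT_BYTES = 5_000_000

class EgressMonitor:
    """Crude egress monitor: flags users whose cumulative retrieved
    volume exceeds a daily limit, a rough signal of bulk exfiltration."""

    def __init__(self, limit: int = DAILY_LIMIT_BYTES):
        self.limit = limit
        self.usage = defaultdict(int)  # user -> bytes returned today

    def record(self, user: str, payload: bytes) -> bool:
        """Accumulate a user's egress; return True if over the limit."""
        self.usage[user] += len(payload)
        return self.usage[user] > self.limit
```

A flag from `record` would feed the same alerting channel as any other security event; it is a coarse control, meant to complement, not replace, integrity and access-channel monitoring.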
In addition to the cybersecurity concerns, there are ethical and legal risks, as well as operational and reputational risks.
AI is a marvelous tool, but it does require additional cybersecurity considerations to operate safely, and these need to be part of the risk analysis process, as well as any security compliance audits.