Recently, generative AI has taken center stage in the cybersecurity community. While many employees already use it daily to write emails or blog articles, CISOs are apprehensive about its integration into their technology stack. Their fears are justified: CISOs need precise, secure, and responsible generative AI. However, a question arises: Is current technology ready to meet these high demands?
CISOs and CIOs have welcomed the arrival of generative AI with enthusiasm tinged with concern. Beyond the growth in productivity and the support provided to IT and Security teams facing a skills shortage, the new risks raised by this disruptive technology must be considered.
Security managers must ask themselves several questions before authorizing the integration of generative AI into their environments, whether as a tool intended for their teams or a component of their products.
New AI-powered tools drive productivity.
Let’s be realistic. Your colleagues are already using generative AI tools to simplify everyday tasks. For example, a pre-sales representative can send a perfectly worded email to a prospect in the blink of an eye. A support team can seamlessly create explanations for the company’s knowledge base. Similarly, a marketing team can quickly generate illustrations for a new brochure using an AI model instead of spending time searching for the perfect image. Finally, templates are available if a software developer needs to write code quickly.
These different applications have one thing in common: they show how generative AI allows employees, all departments combined, to save time, increase their productivity, and carry out their daily tasks with alarming ease.
However, there are also disadvantages. Many of these tools are hosted on the Internet or rely on an online component. Therefore, when teams submit first-party data or a customer’s information, the terms of use may have limited confidentiality, security, or compliance.
Additionally, submitted data can be used to “train” the AI ​​, meaning that prospect names and contact details are permanently embedded in the “model weights.” In other words, reviewing these tools as carefully as those offered by other vendors is essential.
Another major concern is that AI models have a strong tendency to “hallucinate,” that is, to confidently provide incorrect information. Because of the training process, these models are conditioned to give answers that seem correct but may not be. Just think of those lawyers who criticized ChatGPT for giving them false judicial decisions that they used in court.
Then there is the issue of copyright. The Getty Images agency recently accused Stability AI of copying 12 million images without permission to train its AI model. Regarding models generating source code, there is a risk that they inadvertently produce code subject to open-source licenses, which could force the company to publish part of this code as open-source.
Potentially more powerful products
Let’s say we want to integrate generative AI into a product. What parameters should we take into account? In the procurement process, if engineers test suppliers using their credit cards, this could lead to the privacy issues discussed above. If opting to use open templates, the legal team must be able to review the license. Many generative AI models come with use case restrictions, whether how the model can be employed or what the team can do with the data it produces.
Many licenses may seem, at first glance, similar to open-access products, but this is only sometimes the case. If training your models, including building open access models, it is essential to ask two questions: first, what data will be used, and second, is that data suitable for its intended use?
What the model observed during the training phase can have implications during inference. Does this comply with the data retention policy? Additionally, if you train a model with customer A’s data and customer B later uses it for inference, there is a risk that customer B will access some of customer A’s data. This means that the field of generative models is not exempt from the risk of data leaks.
Generative AI has its attack surface. Therefore, the product security team will need to explore new types of attack vectors, such as indirect prompt injection attacks. If an adversary can take control of any input text submitted to a large generative language model—for example, data that the model must synthesize—it can mislead the model into believing that this text represents new instruction.
Finally, it is essential to stay on top of regulatory developments. Around the world, new rules and frameworks are being developed to support the challenges posed by generative AI. In Europe, this concerns the Artificial Intelligence Act (AI Act), while in the United States, initiatives such as the AI ​​Risk Management Framework published by the National Institute of Standards and Technology ( NIST) or the draft AI Bill of Rights proposed by the White House are to be monitored.
Two things are certain: firstly, generative AI has a bright future ahead of it; second, both teams and customers are eager to exploit its full potential. As cybersecurity professionals, we can voice our legitimate concerns to drive responsible adoption of this new technology so that the enthusiasm we see today does not become a regret.
That’s why it’s up to CISOs and other leaders to think about AI’s role in their business and products. An informed approach to AI adoption will enable the industry to move forward and sustainably meet the future by significantly reducing risks.
Also Read : Cybersecurity: The Impact Of Detection Rate On Organizational Risk