Experimenting with AI for the Creation of Information Security Policies
In today’s rapidly evolving digital landscape, safeguarding sensitive information has become a top priority for organizations across the globe. As technology advances, so do the threats to data security. This is where Artificial Intelligence (AI) steps in as a powerful ally in fortifying information security policies.
The real motivation for this article was to create an AI/GPT security policy that would cover the do's and don'ts of using this kind of technology. Of course, I expected to get a well-written and complete document that would not only save me time but also give me additional insights on how to use such tools. In this blog post, I want to explore how AI, in the form of ChatGPT and Google Bard, can (or cannot) be harnessed to strengthen and streamline the creation of information security policies.
Before going into the results, let me start by saying that the initial intent was not to compare ChatGPT against Bard, but the outcome was so bad for both of them that I could not avoid weighing and sizing up each one's performance.
Here’s the prompt that I used for both ChatGPT and Bard:
Write an information security policy for the usage of GPT and artificial intelligence
ChatGPT’s answer to the prompt was a simple, short policy that could serve as a starting point, but definitely not a serious document. It covered essential topics such as roles, responsibilities, compliance, data privacy, and incident handling, but entirely missed the sections that would discuss what can or cannot be done when using such tools.
The content of each section clearly uses the words “AI” and “GPT” as placeholders dropped at the end of generic sentences. To confirm this, I generated similar policies using small variations of the prompt, replacing the words “GPT” and “AI” with words such as “cats,” “dogs,” and “rocks.” The result was, with a few nuances here and there, almost the same.
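The substitution test described above can be sketched programmatically. This is a minimal illustration of the idea; the helper names and the toy responses below are my own stand-ins, since the real experiment was run in the chat interfaces, not via code:

```python
# Sketch of the placeholder test: build prompt variants by swapping the
# subject, then check whether two responses are identical once the subject
# word is normalized away. The response strings are toy stand-ins that
# mimic the templated output described in the article.

def make_prompt(subject: str) -> str:
    """Build a policy-request prompt for an arbitrary subject."""
    return f"Write an information security policy for the usage of {subject}"

def normalize(policy: str, subject: str) -> str:
    """Replace the subject with a generic token so templates can be compared."""
    return policy.replace(subject, "<SUBJECT>")

def same_template(policy_a: str, subject_a: str,
                  policy_b: str, subject_b: str) -> bool:
    """True when both policies are the same text modulo the subject word."""
    return normalize(policy_a, subject_a) == normalize(policy_b, subject_b)

# Prompt variants used in the experiment:
prompts = [make_prompt(s) for s in
           ("GPT and artificial intelligence", "cats", "dogs", "rocks")]

# Toy responses illustrating the behavior observed with ChatGPT:
resp_ai = ("Employees must handle GPT responsibly. "
           "Incidents involving GPT must be reported.")
resp_cats = ("Employees must handle cats responsibly. "
             "Incidents involving cats must be reported.")

print(same_template(resp_ai, "GPT", resp_cats, "cats"))  # → True
```

If `same_template` returns `True` for wildly different subjects, the model is effectively filling in a template rather than reasoning about the subject, which is exactly the behavior observed here.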
Google Bard’s performance
For some reason, Google Bard gives a much longer response when the prompt is written in Portuguese (I don’t know why). Its answer to the very same English prompt was even shorter and more standardized than ChatGPT’s. It included predefined sections covering the purpose, scope, principles, responsibilities, and security requirements.
What distinguishes Google Bard is a section labeled “Specific Considerations for…,” which provides valuable details specific to the policy being requested. When prompted with variations involving “cats and dogs,” “rocks,” and “sound technologies,” it offered insights related to factors such as weight, size, sharp edges, behavior, training, and health – THAT was what I was looking for! Specifics!
It’s worth noting that Google Bard appears to use a consistent template for generating information security policies. While it mainly modifies the section that pertains to specifics, it provides a clearer and more relevant response compared to simply replacing words as ChatGPT does.
In conclusion, both ChatGPT and Google Bard fall short at generating comprehensive information security policies. While they may be valuable for individuals with limited experience or those seeking a starting point, they are not suitable for producing serious and thorough documents. Google Bard, with its template and specific considerations, exhibits slightly better performance: it is clearer in its approach and offers more relevant information for the intended policy. In contrast, ChatGPT’s output is less refined, relying on straightforward word substitution. There is plenty of room for improvement in both models before they can provide sophisticated, tailored responses for information security policy creation.