Good Policy Guide: AI Policy


We’re seeing a lot of companies put together AI Policies, either as standalone documents or as part of their Acceptable Use Policies, to try to manage their employees’ use of AI tools such as ChatGPT. Most of the policies I’ve reviewed have had major issues.

A lot of those issues are the same old problems we covered in our Akimbo Good Policy Guide, so I’d recommend reviewing that first – but there are a couple of specifics that I wanted to cover here.

The most important thing to do before writing an AI policy is to be clear about your objective – which of the following you are trying to achieve:

  • Banning the use of AI.
  • Allowing the use of AI, with some limitations and accountability requirements.
  • Encouraging the use of AI, with some guardrails so that employees make good decisions on the types of data they share with these tools.

Your stance on the above may of course differ depending on the AI tool under discussion, and this consideration becomes more and more difficult as vendors wedge AI integrations into ever more of their tools.

The most common weakness I come across in AI Policies is ambiguity. For example, a policy might include a statement such as: “Employees should use AI responsibly and must not rely on the output of AI tools for their work.”

The problem with statements like this is that they are subjective, offering no real guidance on what “responsibly” means, or what “rely on” means. It feels restrictive, but it doesn’t draw a clear line on what’s allowed or disallowed.

Banning Specific Tools

Many organisations have specific concerns around certain tools – such as DeepSeek, which was developed by an organisation based in China – and are looking to restrict specific AI tools. However, a common issue with this approach is that many tools are effectively just front-ends to other tools. Banning your staff from accessing DeepSeek.com, for example, may be ineffective, as there are other ways of accessing the tool that do not involve the website at all – such as using another front-end for the model, or downloading the model and running it locally. These alternative approaches may not be clearly banned by the policy.

A more effective approach may be to produce a list of pre-approved tools, allowing staff to utilise AI tools only if they have been reviewed by the organisation. Our example policy below includes a statement about how staff members can request that tools be added to the pre-approved list, but does not contain guidance on the criteria an AI tool must meet to be approved – that should be handled separately and likely requires process documentation. It’s also a good idea to require that the list of tools is frequently reviewed.

If you create an approved tools list, you may also want to be more detailed than just naming the tool, such as including the URL of the tool. This is due to the large number of malicious tools that mimic legitimate tools.
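To make this more concrete, here’s a minimal sketch of how an approved-tools list could be kept as structured data rather than prose. The tool names are borrowed from the example policy later in this guide; the URLs, fields, and helper function are illustrative assumptions, not a prescribed implementation:

```python
# Illustrative sketch only: the URLs and fields below are placeholders, not recommendations.
from dataclasses import dataclass


@dataclass(frozen=True)
class ApprovedTool:
    name: str                        # The tool as named in the policy
    url: str                         # The only URL staff should use to access it
    approved_uses: tuple[str, ...]   # e.g. code review, boilerplate generation
    next_review: str                 # When this approval should be re-reviewed


APPROVED_TOOLS = [
    ApprovedTool(
        name="GitHop Codriver",
        url="https://codriver.githop.example",
        approved_uses=("code review", "boilerplate generation"),
        next_review="2026-01-01",
    ),
    ApprovedTool(
        name="Amazing CodeSensei",
        url="https://codesensei.example",
        approved_uses=("code review",),
        next_review="2026-01-01",
    ),
]


def is_approved(url: str) -> bool:
    """Return True only if the URL exactly matches an approved tool's published URL."""
    return any(tool.url == url for tool in APPROVED_TOOLS)
```

Keeping the list in one machine-readable place makes the “frequently reviewed” requirement easier to evidence, and the same data can feed technical controls such as a web proxy allowlist.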

Banning Specific Uses

Organisations may wish to allow certain AI tools but restrict certain uses of those tools. For example, they may intend to allow marketing images to be AI generated, but not want (or not be legally allowed) to have AI make certain types of decisions, such as the final hiring decision on candidates or decisions regarding financial transactions.

This comes up frequently with software developers, who might see great benefits when using AI tools, such as gaining assistance in spotting bugs or developing common functions required within applications. The obvious downside of course is the potential for code that is considered confidential to be shared inappropriately with AI tools.

Banning Data Sharing

It’s common to see organisations include clauses within their AI policies to ban the sharing of certain types of data with AI tools. However, these statements are often generic, such as:

“Do not include any sensitive data in submissions sent to AI tools.”

The problem here is that “sensitive” is not well defined. Does this include personal data? Does it include company confidential data? Does it include any data that is under an NDA?

This isn’t just an AI problem – it’s a data management problem. Ideally, an organisation should have a Data Classification Matrix which clearly defines the classification levels in use by the organisation and enforces the use of data classification labels, meaning that a policy can state something like:

“Do not include any data protectively marked as company-confidential (or higher) in submissions sent to AI tools.”

However, this will be ineffective if you do not have good data management, classification, and labelling practices in place. Fix that problem first.
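As a toy illustration of the kind of mechanical check that good labelling enables – the label names mirror the example statement above, but the ordering and function are assumptions rather than part of any policy – an ordered classification scheme lets tooling decide whether a given document may be submitted to an AI tool:

```python
# Toy sketch: label names and their ordering are illustrative; align them with your own matrix.
CLASSIFICATION_ORDER = ["public", "internal", "company-confidential", "restricted"]


def may_share_with_ai(label: str, ceiling: str = "public") -> bool:
    """Allow sharing only if the document's label does not exceed the permitted ceiling."""
    return CLASSIFICATION_ORDER.index(label.lower()) <= CLASSIFICATION_ORDER.index(ceiling.lower())


# A document marked Public may be submitted; Company-Confidential (or higher) may not.
assert may_share_with_ai("Public") is True
assert may_share_with_ai("Company-Confidential") is False
```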

Additional things to consider when it comes to AI use in organisations:

You should consider requiring periodic audits of AI tool usage across the organisation. It’s no good having a policy banning AI tools if no one follows that policy.
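As a rough sketch of what such an audit might look like in practice – the log format, file path, and domain lists below are assumptions for illustration only – you could periodically scan web proxy logs for traffic to known AI tool domains that are not on your approved list:

```python
# Rough sketch: the CSV columns, file path, and domain lists are illustrative assumptions.
import csv
from collections import Counter

# Domains associated with AI tools, maintained alongside the policy.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "chat.deepseek.com"}
APPROVED_AI_DOMAINS = {"codriver.githop.example"}  # Hypothetical approved tool domain


def audit_proxy_log(path: str) -> Counter:
    """Count requests per (user, host) to AI tool domains that are not approved."""
    findings: Counter = Counter()
    with open(path, newline="") as handle:
        # Assumed CSV columns: timestamp, user, destination_host, url
        for row in csv.DictReader(handle):
            host = row["destination_host"].lower()
            if host in KNOWN_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
                findings[(row["user"], host)] += 1
    return findings


if __name__ == "__main__":
    for (user, host), count in audit_proxy_log("proxy_log.csv").most_common():
        print(f"{user} accessed {host} {count} time(s) despite it not being on the approved list")
```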

Implement an internal approval process for AI tool adoption. AI tooling certainly has advantages, but if you want to restrict certain tools from being used, having approved-tool and approved-use policy statements is key.

Provide training to employees on AI tool use, including how these tools can be abused, how they may misuse data, and the weaknesses that exist within these tools – such as their ability to make mistakes, hallucinate, and generally give bad output.

You’ll likely want to monitor the use of AI tools within your organisation wherever possible. You may need to notify your users of this monitoring (although that notification may fit better in another document, such as your Acceptable Use Policy). Just because you have set rules out in a policy document doesn’t mean users aren’t breaking them.

Getting Started

The following statements are designed to be used as a starting point for an organisation looking to allow the use of AI tooling, but with limitations on its use and accountability for users. You’ll want to review this in line with your standard template and ensure it includes all of the details covered in the Akimbo Good Policy Guide, such as a version control table and a review process – but hopefully it’s something to get you started:

Purpose

The purpose of this policy is to provide clear guidelines on the permissible use of AI within the organisation, to ensure responsible and ethical use of AI tools. This includes ensuring that both company data and intellectual property are protected when AI tools are used. Furthermore, this policy aims to mitigate the risks related to bias, misinformation, and errors that may be present within the output of these tools. Finally, this policy aims to ensure business objectives are met whilst regulatory requirements are followed.

Violation of this policy may result in disciplinary action, up to and including termination.

Scope

This policy applies to all individuals who have access to the organisation’s IT systems, assets, or data. This includes employees (permanent, temporary, and part-time), as well as contractors, agency workers, interns, and volunteers.

This policy is applicable to any use of AI tools on company systems, on company data, or on data entrusted to the company, such as any personal or confidential information held by the organisation. This policy applies to all third-party AI services, all services that include AI features, and any internally developed AI tools.

An “AI Tool” is any software, application, or system that uses artificial intelligence to perform tasks such as automation, data analysis, decision-making, content generation, or user interaction. This includes, but is not limited to, AI-powered chatbots, virtual assistants, predictive analytics tools, machine learning models, and generative AI applications. This includes standalone AI platforms (e.g. ChatGPT, Bard, Claude), as well as AI-powered features within applications (e.g. Microsoft Copilot and AI-powered email assistants).

Data Sharing

Third-party AI tools must not be used to process, store, or share any:

  • data classified above Public on the company’s data classification matrix.
  • data protected by a non-disclosure agreement.
  • personal data, including special category data or criminal offence data.
  • code from internally developed software, or software developed by third-parties on behalf of the company, except those tools explicitly approved for code review.

Prohibited Uses

AI tooling may not be used to make the final decision in any financial transaction or during the hiring process.

Authorised Tools

The following third-party AI tools are authorised for use by this policy. Any third-party tools not listed below are prohibited.

  • GitHop Codriver
  • Amazing CodeSensei

Whilst AI tooling may be used to generate code for company projects, the developer utilising the AI tool is responsible for the generated code and must fully review it to ensure it follows our development guidelines, including reviewing the produced code for bugs and maintainability.

Where AI tools are used to generate content for documentation, it is the tool user’s responsibility to review the content for inaccuracies. Additionally, it is the user’s responsibility to ensure that generated content does not violate any compliance or regulatory requirement, such as by containing copyrighted material or misusing trademarks.

Beware that there are many fake AI tools available. Malicious actors often create counterfeit AI services that mimic legitimate tools to harvest data, compromise security, or spread misinformation. Only access tools using the link provided in the approved tool list. If you suspect you have accessed a tool that may be fraudulent or harmful, report it immediately to the IT Security Team.

Tool Approval Process

If a staff member requires the use of an AI tool not listed in our approved tools list above, they are to submit their requirements using a software request form, including the name of the tool, the type of data that will be submitted to the tool, and the business function that it supports. This form must be signed by their line manager and then submitted to the IT team for approval.

AI Transparency and Accountability

Where AI tools are used to generate content that may be publicly released (e.g. images for our websites or marketing materials), a record of this must be retained alongside the content. This is to support our goal of being transparent in our use of AI, and also to assist us in tracking the licences of images used in our marketing materials and on our website, to ensure no copyrighted material is used without an appropriate licence.

AI Risk Management

All staff members must be trained on the use of approved AI tools, including the potential issues with the use of AI, during onboarding and at least annually.

Have you put our AI Policy guide to good use? We’d love to hear from you. Looking to review your policies, or need help writing them? We’re here to help.
