Why your organisation shouldn't have an AI policy

Most organisations, whether private, public or focused on social impact, struggle with their tech stack. There are always too many problems to solve and too many platforms that promise to solve them, usually without success. Similar projects are started with different management tools, and nobody really knows who should decide on an organisation-wide one. When a decision is finally made, a few people disagree and continue to use their preferred tools (usually with a free, personal account) on the side, creating a security and privacy risk.

An IT policy is supposed to create order by clearly defining which platforms should be used and how, as well as what exactly team members should have access to with their personal devices and accounts. It should also point team members to appropriate documentation, training and ways to ask specific questions. It should be easily accessible to anyone, especially new hires, so they can consult it at any point.

I say "supposed to" because most of them don't. Even if you have an organisational IT policy, it is probably a simple list of tools and a couple of warnings about what not to do based on traumatic experiences (if there is a warning, there is a story), and it hasn't been updated in years. Overall, keeping it up to date is never the highest priority, and the people responsible for the policy have usually been around long enough to assume everyone else is familiar with the tools and rules.

When AI-based chatbot tools started to get popular, the same confusion started all over again: some team members wanted to test them, some wanted the organisation to take a strong stance against them, and some hadn't even heard of AI yet. The leadership team rushed to a consensus and built a separate beast in the form of an AI policy, trying to guide the team on how to use these tools for work. But with the technology advancing quickly and platforms adding new AI features every day, how do you keep yet another policy up to date? Now that your email provider has an AI feature, where should guidelines for its use sit?

A better approach is to maintain a single, complete, up-to-date IT policy that covers different types of technology, including AI, and defines how different types of information should be handled in each one.

You don't need a perfect IT policy from day one; start with the essentials and build over time. A basic policy that is implemented and followed is far more valuable than a detailed one that is unused.

A practical approach would be:

  • Begin by documenting which tools your team currently uses, what data each one handles, and where potential vulnerabilities exist
  • Use a template that you can customise for your organisation's specific needs
  • Ensure all staff understand the policy, why it exists, and how to implement it in their daily work. Regular training sessions and clear documentation help maintain compliance
  • The policy should be easily accessible to all staff and written in clear, practical language
  • Remember that technology changes rapidly. Schedule regular policy reviews (every three or six months) to assess new tools, update guidelines, and incorporate lessons learned

Tecer Digital offers free IT policy templates designed specifically for nonprofits and social impact organisations. We also support organisations that need help developing and updating their policies, alongside training frameworks to ensure teams are comfortable with the newest technology available, especially AI-based tools. If you want to know more, get in touch with us.