Your tech questions

When I started writing the first draft of what the AI Literacy Lab was supposed to be, three pillars were evident:

  1. It should focus on practical, actionable information, shaped by needs and challenges from the organisation we're working with
  2. It should give context to allow for continuous critical thinking about AI
  3. It should continue over time, understanding that developments are happening every day

It took a while to arrive at the third pillar, but here I am. Through these articles, I'll bring some of the tech questions I've received during workshops, via email or in other conversations, and attempt to answer them as clearly as possible. Please keep in mind that a lot is changing in the field of AI (and technology as a whole), so information can become obsolete quite quickly.

If you have a question, please share it at hello@tecer.digital. Questions will be kept anonymous (unless you explicitly request otherwise). No question is too basic (or advanced), too simple (or complicated), too specific (or broad). There are a lot of people writing about AI today, and I don't want to be just another one; my proposal is to consider the context of purpose-led organisations in every response, so you don't have to work out how to apply technical concepts and ideas to your field.

Is the information I enter into AI secure?

Since AI is too broad a term to mean much on its own today, let's confirm we are talking about widely available, general-purpose LLMs, usually presented in the form of chatbots, like ChatGPT, Claude and Gemini.

Before going into the tools specifically, one question is important: what do you mean by secure?

Different types of information need different levels of security. In the context of an organisation, these levels should be mapped and described in your IT or digital security policy. Information is usually classified as:

  1. Public: anything that is already available on the organisation's website, for example
  2. Internal: information not publicly available, but accessible to all team members and possibly partners or volunteers
  3. Confidential / Sensitive: information that can only be accessed by specific people (such as financial statements) or that can put other individuals at risk (such as personal details)

When you are using an online platform (anything connected to the internet!), the information you share is uploaded to and stored on that company or organisation's servers. This means that, in theory, the information is available to everyone who has access to those servers, as well as to any systems connected to the platform and its database. Privacy and security policies exist to make these accesses and connections transparent to users of these platforms (even though most people don't read them).

But AI-based platforms go a step beyond that: companies don't just store the data, they also use it to train future models. And once it has been used for training, it can't simply be deleted; even if a company specifies that a piece of information cannot be used, these systems are too unpredictable for anyone to guarantee it won't appear as output at some point.

Before using any AI-based tool, find out:

  • Who stores and has access to the inputted information (usually described in the privacy policy on the company's website)
  • If the information is used for training (this is true for almost all free tools, and even in paid ones you usually have to manually disable this option)

Long story short: generally, only input information that is already public (or will become public soon) into these tools. Anything internal or confidential should only be shared on platforms and tools approved and managed by your organisation. Don't use your personal accounts to do any work.

I know the focus is on ChatGPT and similar tools, but this also applies to some of the tools most of us are so used to that we don't even consider them possible security risks, such as DeepL, Google Translate (or any free Google service such as Search, Gmail, Calendar and Drive) or Grammarly.

If you are looking for a private chatbot for personal use, one of the founders of Signal recently launched Confer. I haven't tested it extensively, but it seems to be a much safer option than any of its competitors.
