Privacy in the Age of AI
AI and privacy intersect in several ways that every professional using AI tools should understand. This lesson covers the practical privacy considerations for everyday AI use.
What Happens to What You Type
When you type into a consumer AI tool (ChatGPT, Claude.ai, Gemini), your inputs may be:
- Used to improve the model (though most tools offer opt-outs)
- Stored by the company for a period of time
- Accessible to the company's employees in some circumstances
- Subpoenaed by law enforcement with appropriate legal process
This does not mean these tools are unsafe for general use. It means they are not appropriate for certain kinds of sensitive content.
What You Should Not Put in Consumer AI Tools
- Personal data about third parties: pasting a customer's full name, contact details, or personal circumstances into ChatGPT is a data-handling decision with potential GDPR, CCPA, or other regulatory implications.
- Employee performance information: details about individual employees' performance, compensation, or personal circumstances.
- Confidential business information: strategic plans, unreleased product information, M&A discussions, client confidential information.
- Credentials and access keys: never paste API keys, passwords, or authentication tokens into AI tools.
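One practical guard-rail some teams adopt is a simple scrubber that runs over text before it is sent to an external AI tool. The sketch below is illustrative only, not a complete solution: the patterns are assumptions covering a few common formats, and real secret scanners use much larger rule sets plus entropy checks.

```python
import re

# Illustrative patterns only; a production scanner would cover many
# more credential formats and personal-data types than these three.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._~+/-]+=*", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace likely credentials and personal data with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

print(scrub("Contact jane.doe@example.com, key AKIA1234567890ABCDEF"))
```

A scrubber like this reduces accidental leakage, but it cannot make a consumer tool safe for content that should never leave the organisation in the first place.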
Enterprise vs. Consumer AI Tools
Enterprise versions of AI tools (ChatGPT Enterprise, Claude for Enterprise, Microsoft 365 Copilot) typically offer stronger privacy protections:
- Inputs are not used for model training
- Data stays within your organisation's tenancy
- Stronger contractual data handling commitments
For organisations handling sensitive data, the distinction between consumer and enterprise tiers is significant.
Privacy by Design in AI Projects
For anyone building applications or systems that use AI, the privacy principles that apply to software in general apply with even greater force:
- Data minimisation: collect and pass to AI systems only what is necessary for the task
- Purpose limitation: use data only for the purpose for which it was collected
- Transparency: tell users when and how AI is being used to process their information
- Consent: where required, obtain informed consent before using personal data with AI systems
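Data minimisation can be made concrete in code. The sketch below assumes a hypothetical support-ticket summariser; the record layout and field names are invented for illustration. The point is structural: only the fields the task actually needs are extracted before any prompt is built, so identifying fields never reach the AI system.

```python
# Hypothetical support-ticket summariser: field names are illustrative.
# Only fields required for the summarisation task are allowed through.
ALLOWED_FIELDS = {"product", "category", "issue_description"}

def minimise(record: dict) -> dict:
    """Keep only the fields the task needs; drop everything else."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

full_record = {
    "customer_name": "Jane Doe",   # personal data: excluded from the prompt
    "email": "jane@example.com",   # personal data: excluded from the prompt
    "product": "Widget Pro",
    "category": "billing",
    "issue_description": "Charged twice for the annual plan.",
}

prompt_input = minimise(full_record)
print(prompt_input)
```

An allow-list (keep only named fields) is the safer default here: a deny-list silently fails when a new sensitive field is added to the record, whereas an allow-list excludes anything it has not been told about.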
The Inference Risk
AI systems can sometimes infer sensitive information from non-sensitive inputs. A model trained on population data might be able to infer health status from purchase patterns, or political affiliation from location data.
This is an emerging area of privacy concern: sensitive attributes can be derived from seemingly innocuous data inputs. It is worth being aware of, particularly if you are building AI applications that process user data.