Your Responsibilities as an AI User
Ethics is not only about what AI companies and governments do. Individual users also have responsibilities. This lesson is a practical guide to those responsibilities.
The Agency Problem
A common framing: AI is a tool, and tools are neutral. Responsibility lies with those who build them.
This framing is too simple. Users shape how AI tools are used, what content they produce and distribute, and what norms become acceptable. Individual choices aggregate into social outcomes.
This does not mean users bear all responsibility — the distribution of responsibility matters and companies bear more. But it does mean users are not passive.
Your Core Responsibilities
Verify before you publish
If AI-generated content will reach an audience — customers, colleagues, the public — you are responsible for its accuracy. The fact that AI generated it is not a defence for publishing misinformation.
Be transparent about AI use where it matters
Not every AI use requires disclosure. But there are contexts where transparency is important:
- Academic work (follow your institution's guidelines)
- Journalism and editorial content
- Legal filings and professional advice
- Any context where the audience has a reasonable expectation of human authorship
Do not use AI to deceive or manipulate
Using AI to generate fake reviews, impersonate people, create misleading content, or manipulate public opinion is an ethical violation that can also be a legal one.
Protect other people's data
Do not put other people's personal information into AI tools without an appropriate basis. You are effectively a data controller for information about others, even in casual professional use.
Consider the downstream effects of your AI use
If you automate a task with AI, think about who else that automation affects — employees, contractors, communities — and whether those effects are acceptable.
Building Good Habits
Default to verification: Treat AI outputs as drafts to be verified, not facts to be published
Stay informed: AI capabilities and risks evolve quickly, so keep updating your understanding
Ask the ethical questions: When deploying AI in professional contexts, ask the questions from lesson one: What data? Who bears the cost of errors? Is there human oversight? Who is accountable?
Speak up in organisations: If you see AI being used in ways that raise ethical concerns, raise them. Good AI governance requires people willing to ask hard questions.
The Bigger Picture
The norms around AI use are being established right now. The decisions individuals and organisations make today — what uses are acceptable, what transparency is required, what safeguards are necessary — will shape the AI landscape for years.
Responsible use is not just about avoiding harm to yourself. It is about contributing to a shared environment where AI is used in ways that are trustworthy, fair, and genuinely beneficial.