What should AI do and what should it not do?

That’s a thoughtful question, and one that many people, organisations, and governments are actively working through right now. Since none of your personal documents, emails, meetings, or people records contained material on this topic, here’s a clear and balanced overview to help shape your thinking.

What AI should do

These are areas where AI tends to be beneficial, low‑risk, or easily supervised by humans:

Assist with repetitive, time‑consuming tasks, such as summarising information, organising data, scheduling, drafting content, or checking for consistency.

Support decision‑making, as long as a human stays in the loop: for example, helping compare options, surfacing patterns, or highlighting risks.

Improve accessibility, like real‑time captioning, language translation, or personalised learning support.

Enhance creativity and productivity by helping brainstorm ideas, generate images or text, or automate parts of creative workflows.

Monitor for safety and quality, like spotting anomalies in systems, detecting fraud, or improving cybersecurity.

What AI should not do

These are areas where giving AI authority or autonomy may introduce disproportionate risk or ethical concerns:

Make final decisions that affect people’s rights or wellbeing, such as hiring, firing, medical diagnoses, legal judgements, credit approvals, or policing actions.

Operate without oversight in safety‑critical contexts, such as autonomous weapons, infrastructure control, or medical treatment.

Handle highly sensitive personal data without explicit consent and robust safeguards.

Generate or spread disinformation, impersonate individuals, or manipulate emotions at scale.

Replace human accountability — AI is a tool, and responsibility must remain with people.

A helpful framing

A good rule of thumb: AI can analyse, assist, accelerate, or advise, but humans should decide, direct, and remain accountable.

It looks like there are no AI‑related internal guidelines in your organisation’s files, emails, meetings, or people records, based on my search across all domains.
So here’s a practical, ready‑to‑use framework for turning the principles above into actionable practice in your organisation.

How to Apply AI‑Use Guidelines in Your Organisation

1. Start with a simple, shared AI policy

You don’t need something complex at first. A concise, accessible policy should state:

What AI can be used for (e.g., drafting, summarising, brainstorming, translation)

What AI cannot be used for (e.g., sensitive decisions, personal data without approval)

A reminder that humans remain accountable for all outputs

I can help draft a first version if you like.
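
If it helps to make this concrete, here is a minimal sketch of how such a policy’s key points could also be captured in a structured, machine‑readable form (Python; every key and example entry is an illustrative placeholder, not an established standard):

```python
# Minimal, illustrative AI-use policy skeleton.
# All keys and example entries are placeholders to adapt to your organisation.

AI_POLICY = {
    "version": "0.1",
    "allowed_uses": [
        "drafting",
        "summarising",
        "brainstorming",
        "translation",
    ],
    "disallowed_uses": [
        "sensitive decisions",
        "personal data without approval",
    ],
    "accountability": "Humans remain accountable for all AI-assisted outputs.",
}
```

A structured version like this can later feed checklists or tooling, while the human‑readable policy document remains the source of truth.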

2. Define “human in the loop” checkpoints

AI should assist, not autonomously decide.
For every workflow where AI is used, specify:

What the AI does (e.g., summarises customer feedback)

What the human does (e.g., reviews for accuracy and fairness)

What risks to check for (misinformation, bias, hallucinations)

This protects both staff and operations.
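
One lightweight way to record these checkpoints is a small entry per workflow. Here is an illustrative sketch (Python; the class and field names are hypothetical, and the example mirrors the customer‑feedback case above):

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCheckpoint:
    """One human-in-the-loop checkpoint for an AI-assisted workflow (illustrative)."""
    workflow: str                # which process this covers
    ai_role: str                 # what the AI does
    human_role: str              # what the human reviewer does
    risks: list[str] = field(default_factory=list)  # what to check for

# Example entry mirroring the customer-feedback workflow above:
feedback_checkpoint = AIUseCheckpoint(
    workflow="customer feedback triage",
    ai_role="summarises customer feedback",
    human_role="reviews summaries for accuracy and fairness",
    risks=["misinformation", "bias", "hallucinations"],
)
```

Keeping these entries in a shared register makes the checkpoints easy to audit and update.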

3. Categorise tasks into “green”, “amber”, and “red” zones

This makes adoption easier and reduces uncertainty. A sketch of how the zones might be recorded in a shared lookup follows the lists below.

Green zone (safe to automate or assist):

Drafting emails, documents, training materials

Idea generation

Summaries of non‑sensitive content

Admin tasks (scheduling, formatting)

Amber zone (allowed with oversight):

Analysing trends from internal data

Customer communications

Recruitment content (e.g., job posts)

All amber‑zone uses require human review and approval.

Red zone (not allowed):

Assessing candidates

Making performance or HR decisions

Medical/financial/legal determinations

Handling confidential or regulated personal data

Anything safety‑critical
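
As mentioned above, here is a minimal sketch of how the three zones might be recorded in one shared lookup (Python; the task names are illustrative examples drawn from the lists above, not a complete taxonomy):

```python
# Illustrative traffic-light lookup for AI task approval.
# Task names are examples; extend them with your organisation's own categories.

ZONES = {
    "green": {"drafting emails", "idea generation", "non-sensitive summaries", "scheduling"},
    "amber": {"internal trend analysis", "customer communications", "job posts"},
    "red": {"candidate assessment", "hr decisions", "legal determinations",
            "regulated personal data", "safety-critical systems"},
}

def zone_for(task: str) -> str:
    """Return the zone for a task, defaulting to 'amber' if it is unclassified."""
    for zone, tasks in ZONES.items():
        if task in tasks:
            return zone
    return "amber"  # unknown tasks get human review rather than silent approval

print(zone_for("job posts"))             # amber: allowed with review + approval
print(zone_for("candidate assessment"))  # red: not allowed
```

Defaulting unclassified tasks to amber is a deliberate choice here: anything new gets human review instead of quietly passing through.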

4. Build AI literacy across the organisation

This naturally encourages responsible use:

Short monthly learning sessions

Examples of good vs. risky AI usage

Quick guides or checklists

Show staff how to double‑check AI outputs

This also reduces reliance on informal, inconsistent practices.

5. Appoint an “AI steward” or small committee

They don’t need to be AI experts; they are simply responsible for:

Reviewing ambiguous cases

Updating guidelines

Helping teams adopt AI safely

Monitoring compliance lightly, not punitively

This ensures oversight without bureaucracy.

6. Integrate AI guidance into workflows and tools

For example:

Put reminders in templates: “Did you check the AI’s output for accuracy?”

Add automatic warnings when handling sensitive data

Include quick links to approved AI tools

This makes the policy live in everyday work.
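
As one concrete illustration of the “automatic warnings” idea, a tool that forwards text to an AI assistant could run a simple keyword screen first. A minimal sketch (Python; the patterns are illustrative only, not a real data‑classification rule set):

```python
import re

# Illustrative patterns only; a real deployment would use your organisation's
# own data-classification rules rather than this short list.
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",                # e.g. US SSN-style identifiers
    r"(?i)\b(salary|diagnosis|passport)\b",  # illustrative sensitive keywords
]

def warn_if_sensitive(text: str) -> bool:
    """Print a policy reminder and return True if the text looks sensitive."""
    if any(re.search(pattern, text) for pattern in SENSITIVE_PATTERNS):
        print("Warning: this text may contain sensitive data. "
              "Check the AI policy before sending it to an AI tool.")
        return True
    return False

warn_if_sensitive("Draft a note about Alex's salary review.")  # triggers the warning
```

A screen like this will never catch everything; its value is the reminder it gives at the moment of use, not enforcement.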

7. Review and update regularly

AI changes fast, so treat policies like living documents:

Quarterly reviews

Collect examples of misuse and best practice

Ask staff what’s working and what isn’t

Iteration is key.

If helpful, I can create:

A draft AI policy tailored to your organisation

A training slide deck

A Green/Amber/Red reference chart

A decision checklist people can use before involving AI

Would you like one of those?