What this means for businesses using AI

Artificial intelligence is no longer sitting in a regulatory grey area. This month, Ofcom, the UK's communications regulator, opened a formal investigation into X (formerly Twitter) over the hosting of AI-generated sexual imagery. The investigation draws on new criminal offences introduced under the Online Safety Act 2023 and marks one of the first times those powers have been used in practice.

While the headlines focus on a major social media platform, the implications reach well beyond the tech giants.

What’s actually happened?

The regulator’s concern is the creation and circulation of deepfake content that is harmful, non-consensual and misleading. New legislation places clear duties on platforms to prevent, remove and respond to this type of material. Failure to do so can now trigger formal enforcement action, not just criticism or voluntary undertakings.

This investigation signals that regulators are prepared to test and use their powers early, rather than waiting for years of guidance and case law to settle.

Why regulators care

Deepfake technology presents obvious risks. It can damage individuals, undermine trust, distort information and be weaponised very quickly. Regulators are increasingly treating AI-driven harm as a governance issue rather than a technical one, which means recognising it as a board-level risk.

The key shift is this: AI use is no longer judged only on innovation or efficiency, but on control, accountability and foresight.

Why this matters to businesses

Many organisations now use AI in ways they would not describe as advanced or risky. Marketing copy tools, image generators, recruitment screening software, customer chatbots and internal automation all fall under the same broad umbrella.

Even if your business does not develop AI, you may still be responsible for how AI tools are deployed, supervised and corrected. Regulators are less interested in whether harm was intentional and more focused on whether reasonable steps were taken to prevent it.

This is particularly relevant for directors and senior managers who hold responsibility for risk management and compliance frameworks.

Common questions 

Do we need an AI policy if we’re not a tech company?
Yes. If AI is used in your business processes, a simple, clear policy helps demonstrate oversight and intention.

Are we liable for content created by AI tools?
Potentially. Responsibility does not disappear because content was generated automatically.

Is guidance settled yet?
No. Enforcement is moving faster than formal guidance, which increases risk for organisations relying on assumptions.

A simple compliance sense-check

It may be time to review:

- where AI tools are currently used across your business, including everyday tools such as chatbots, image generators and recruitment screening software
- whether a clear, documented AI policy is in place
- who is responsible for supervising AI outputs and correcting errors
- how AI-generated content is checked before it is published or acted upon
- how your risk management and compliance frameworks account for AI-driven harm

These steps are increasingly viewed as basic governance rather than optional best practice.

Final thought

This investigation is less about one platform and more about a clear regulatory message. AI governance is no longer a raise-awareness-and-adapt exercise. It is fast becoming enforce-and-penalise territory.

If your organisation is using AI in any form, now is the moment to treat it as a compliance issue, not just a productivity tool.
