“Co-created by AI” disclosure

I joined a very interesting CIPR Crisis Comms Network ‘In conversation with Philippe Borremans’ webinar. This got me thinking about the differences between risk communication, emergency communication and crisis communication. However, it was Philippe’s view that, as communicators, we need ethical guidelines on using AI that made me reflect most deeply. He spoke about labelling content as “co-created by AI”, and said that new EU law compels this for high-level, high-risk comms. Living in an EU country, I wanted to check this out. Using AI, of course!

Perplexity informed me that the EU AI Act, specifically Article 50, establishes transparency obligations for certain types of AI-generated content. The Act entered into force on 1 August 2024, but the transparency obligations will not apply until 2 August 2026. The requirements apply to two main categories:

  • Deepfakes and Synthetic Media. AI users who generate or manipulate image, audio, or video content must clearly disclose that the content has been artificially generated or manipulated. This applies to content that “resembles real people, objects, places, facilities, or events and would be likely to mislead a person into believing it is real or true”.
  • AI-Generated Text. For text, the labelling requirement is more limited: it applies only to text that is “published with the purpose of informing the public on matters of public interest”. This covers content relevant to political, social, economic, cultural, or scientific opinion.

In true EU form, there is a significant exception to this for content that undergoes human oversight. Article 50(4) states that the disclosure obligation does not apply when the AI-generated content has undergone a process of human review or editorial control, and a natural or legal person holds editorial responsibility for the publication of the content.

This exception means that content co-written with AI assistance, where a human editor reviews, fact-checks, and takes responsibility for the final output, would not require labelling as AI-generated. This human review exception is particularly relevant for:

  • journalistic content, where AI assists with writing but editors review and approve the final text;
  • PR practice, where AI helps draft content but humans maintain editorial control; and
  • academic or business writing, where AI provides assistance but humans take responsibility for accuracy and final approval.

The key is that the human intervention must be meaningful and that someone takes editorial responsibility for the content’s accuracy and appropriateness. 

Back in the good old UK, there are no plans to introduce an AI law as strict or as comprehensive as the EU AI Act. Instead, the current UK approach to AI regulation is deliberately light-touch, favouring flexible, principles-based guidance over prescriptive, centrally mandated legislation.

I asked a group of PR practitioners this week whether they thought there should be legislation or voluntary guidance. Views differed. Some thought legislation was necessary to combat deepfakes deceiving people. Others took a pragmatic view that AI now forms part of many of the tools we use every day, for example to improve images (Photoshop) or text clarity (Grammarly). Do we add a disclosure to most content? We thought the balance should be that a voluntary disclosure when AI is used to create content would be ethical practice. However, some saw disclosure as eroding trust among their audiences, while others saw it as increasing trust.

I already recommend to my PR apprentices that they label images with intellectual property credits (for example, the photographer’s name and the source) and add Alt Text for people who use screen readers. I have noticed more and more people adding such labels, myself included!

Perhaps a “co-created with AI” disclosure would be good practice when AI has played a significant role in creating the content. Maybe it would put an end to the seemingly endless debates about how many articles on LinkedIn are created by AI. And, worse, the accusation that anyone who uses the good old em dash must be AI!

I would welcome guidance from the CIPR and PRCA on an ethical voluntary disclosure practice when using AI tools. There is some great expertise on AI among members in both organisations. Want to pick up the baton, CIPR and PRCA?

[Image of relay baton being passed between runners, by Boom via Pexels]
