AI – is it ever okay to use it for media commentary?
AI is increasingly being used to create PR content - articles, reports, media commentary and more. Indeed, a recent report from the Chartered Institute of Public Relations (CIPR) found that as much as 40% of the tasks PR practitioners perform may be assisted by AI tools. (Worryingly, though, only two in five practitioners claim to understand the ethical implications of using them.)
Our view? Ignoring AI isn’t an option. It’s helpful for kicking off brainstorms; it can make sense of long meetings by transcribing and summarising the views shared; it can help shape thoughts into a logical flow (who doesn’t need that on a Monday morning?); and it can help with researching ideas.
But it cannot - and must not - be used for everything.
One absolute no-no is using AI to answer journalist enquiries.
Reporters need human insight. They want new, real knowledge - from real people, in real organisations, from real experts in the sector. They are after fresh developments and the small shifts only an insider notices, not reworked information pulled from AI applications such as ChatGPT.
Indeed, AI offers plenty of publicly available information, but it can be incredibly ‘vanilla’. As a general rule, the Large Language Models (LLMs) that power most ChatGPT-style applications produce output that has been seen and heard before. As generative AI continues to develop and become mainstream, this may cease to be the case.
At the moment, though, that output lacks the quirks of human nature. There are none of the gems of phrasing only a person produces, and no trends spotted because an expert knows their sector so well, bringing lived experience and acquired knowledge far beyond anything AI can offer.
Aside from the lack of human touch and insight, there is huge potential for sharing inaccurate information. The web is already a minefield of misinformation, and that is not something journalists want to cite - nor something your brand wants to be part of spreading.
The same goes for data. Whilst the opportunities for AI to analyse data are seemingly endless, human intervention is still essential to ensure the data is both reliable and representative. AI can unintentionally reinforce biases if trained on skewed data - especially concerning in HR, where inclusivity and fairness are crucial. Left unchecked, such biases can harm brand reputation and alienate target audiences.
The ethical and copyright implications of AI are arguably the most important considerations, yet they remain somewhat blurred - although the available guidance all points to the same conclusion.
There have been a number of legal cases over copyright. In the Harvard Business Review, Gil Appel, Juliana Neelbauer and David A. Schweidel discuss the implications of using AI for intellectual property infringement, the uncertainty over who owns AI-generated work, and questions about unlicensed content in training data.
They say that “all this uncertainty presents a slew of challenges for companies that use generative AI. There are risks regarding infringement — direct or unintentional.”
Lawyer JJ Shaw comments in the Press Gazette: “Unlike in the US, UK legislation does provide copyright protection to computer-generated works – although only a human author or corporate persona can ‘own’ the IP (never the AI itself).” In practice, ownership comes down to who would likely be deemed the ‘author’ - the person writing the prompts in the AI application, or the person who created the content the platform drew on.
Shaw goes on to say that ultimately “there is no way of knowing whose written works have been used to train the AI or are being drawn on to generate the requested output”.
So, with no way of referencing the original work, using commentary from AI tools could leave you exposed to copyright complexities.
With this in mind, if commentary offered to journalists as a professional’s viewpoint turns out to be information from an AI chatbot such as ChatGPT, both the agency and the professional risk reputational damage to their brand and to their expert standing. They also accept liability for any AI-generated content published, including defamation and misinformation.
Commentary - Whilst AI is reshaping how professionals approach their work, and is very much here to stay, its use shouldn’t be banned in the PR space.
By utilising AI carefully, professionals who struggle to turn their thoughts into coherent, succinct and compelling narratives may be able to rework their own content so that it is easier for the reader to digest.
But given the ethical and legal issues still surrounding the use of AI, if you do this, we recommend letting your PR agency know.
As the CIPR report on Artificial Intelligence (AI) tools and their impact on public relations (PR) practice states, PR professionals everywhere will have to remain constantly mindful of the ethical and legal considerations of how they use AI - as well as advising their own clients on the reputational and other implications of deploying AI for any purpose.
PR in HR is best placed to increase recognition of your HR brand.
We know that gaining valuable PR wins in the workplace media is vital, but it’s also complex.
Amplifying your messages with PR strengthens brand awareness and credibility among a much wider buying audience. It helps build trust. It helps grow your brand.
The problem? There are hundreds of voices clamouring to be heard by national, HR and workplace B2B journalists and influencers, and only a relative few make it into the media spotlight each day.
We help your brand to be one of them, repeatedly.
We use powerful PR so our clients - providers to the HR market - rise above the noise to gain exceptional brand recognition.