ChatGPT: Our experience as an agency

The Resonates team love it when a trend takes off, a client brings us a cleantech innovation to shout about or a new marketing tool launches. But when OpenAI revealed ChatGPT a little over a year ago, there were mixed emotions.

Some of us were excited to try a new tool, others wanted to see how it could positively impact our creative campaigns and content, and a few team members wondered if its capabilities could replace us!

Even the sceptics among us have given it a try (we couldn’t help ourselves) to see what all the fuss is about. So, to answer what you really want to know: is it worth it?

The team share some real-life experiences, lessons learned and key takeaways to help you decide:

For press releases

“I was presented with a draft press release from a client. The marketing manager shared it with a note saying their product manager had drafted the release (using ChatGPT) and asking if we could cast an eye over it, then prep it for distribution. Upon review, as expected, the release was generic. It lacked facts and didn’t address the pain points the client’s new feature solves. The draft ChatGPT release went through numerous edits to make it newsworthy, which delayed its distribution to media.”

Account Director Hannah’s takeaway: Press releases must be newsworthy and factual. They are ‘new news’, so it’s unlikely ChatGPT will be able to deliver the information needed to pique media interest and secure coverage. Not following best practice from the off means you run the risk of your story becoming yesterday’s news.

For a new perspective

“Generative AI is very useful when prompted correctly. There are many short courses available that will help you figure out different prompt approaches to get the results you’re looking for. Trying out well documented prompts saves time too, and one afternoon’s commitment will pay dividends. For example, by crafting personas and using them to prompt ChatGPT, you can ‘interview’ ‘people’ from the target audiences you’re aiming certain content at. It’s by no means flawless, but it can help you uncover new information, remind you of something you’ve overlooked, or help you to see a new angle on something old.

“Caution is needed though. I think the headline-grabbing term ‘hallucinations’ is a misnomer. There is fundamentally nothing in the processes of a generative AI large language model (LLM) that distinguishes hallucinated text from veridical text, so we should be wary of the naive attitude of treating these models as reporting truths that occasionally (read: very often) go wrong.”

Writer Greg’s takeaway: Spend some time learning how to prompt for the best results, and don’t rely on LLMs for truth; use them as creative support.
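Greg’s persona ‘interview’ technique can also be scripted rather than typed by hand. A minimal sketch in Python, assuming the `openai` package and an `OPENAI_API_KEY` environment variable; the persona text, question and model name are illustrative placeholders, not a definitive implementation:

```python
def build_persona_messages(persona: str, question: str) -> list[dict]:
    """Frame a chat request so the model answers in character as a target-audience persona."""
    return [
        {
            "role": "system",
            "content": (
                f"You are {persona}. Answer interview questions in character, "
                "drawing on that persona's priorities and frustrations."
            ),
        },
        {"role": "user", "content": question},
    ]


def interview(persona: str, question: str, model: str = "gpt-4o-mini") -> str:
    """Send one 'interview' question to the ChatGPT API (needs an API key to run)."""
    # Imported here so the message-building helper above works without the package installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=build_persona_messages(persona, question),
    )
    return response.choices[0].message.content
```

For example, `interview("a sceptical operations director at a mid-sized utility", "What would make you read a cleantech press release?")` returns an in-character answer you can mine for angles — subject, as Greg says, to the usual scepticism about anything it asserts as fact.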

For references

“Generative AI technology, which is built into tools like ChatGPT, can be incredibly helpful when producing content, but users need to be aware of its limitations and potential for inaccuracy. Fundamentally, ChatGPT mimics writing patterns rather than behaving as a search engine, and it can invent fake references and thereby propagate misinformation. This was first brought to my attention by my daughter, who is studying for a PhD in genetics. When she asked ChatGPT to provide a reference to back up its answer to one of her questions, this is what happened:

Phoebe: Please could you provide a reference for this?

ChatGPT: Sure! Here is a recent study that demonstrated the improved efficiency of HDR using a single-vector system for delivering both guide RNA and donor DNA: Wu X, et al. (2020) “Efficient homology-directed repair using a single-vector system for CRISPR-Cas9-mediated genome editing.” Nat Commun 11(1): 5153.

This study used a single plasmid to co-deliver the Cas9 nuclease and both guide RNA and donor DNA, and showed that this approach significantly improved the efficiency of HDR compared to traditional methods using separate vectors for each component. However, it’s worth noting that this study used a specific delivery method and cell type, and the results may not be directly applicable to all systems.

Phoebe: I can’t find this paper, can you send a link please.

ChatGPT: I apologize for that. Here’s a link to the paper: https://www.nature.com/articles/s41467-020-18937-5. Note that access to the full text of the paper may depend on your institution’s subscription. If you’re unable to access the full text, try searching for the title in a different database, such as Google Scholar.

Phoebe: The link does not work. Are you sure this paper exists?

ChatGPT: I apologize for the mistake. It seems that the paper I mentioned does not exist. My previous information is incorrect. I apologize for any confusion caused by my mistake.

So only by interrogating the tool did we get it to admit that the reference was fake and declare that ‘My previous information is incorrect’.”

Director Chris’ takeaway: The fake-citations issue is now well documented, and the lesson learned must be to check references carefully. And you have to admit the tool has impeccable manners. But does this flaw open up the risk of misinformation on a mass scale?

For a wide range of data sources

“An increasing number of publishers who employ journalists and professional writers to think and write for them are actively blocking the GPTBot crawler from taking data to feed into OpenAI’s models. The Guardian is one of the latest publications to block ChatGPT; others include CNN, Reuters, the Washington Post, Bloomberg and the New York Times. In addition, ChatGPT is currently unable to scrape articles behind a paywall, such as FT.com. Collectively these resources amount to a significant body of contemporary data that a human writer would typically consult to inform a thought leadership piece.”

Corporate Affairs Specialist Justin’s takeaway: It’s interesting to note that this growing list of titles blocking GPTBot is what most PRs would term tier 1 media. That makes sense to me, as they are more focused on reporting, and in some cases obliged to cover, material announcements. For listed companies in particular, the disclosure and wording of such news may be tightly regulated. I can, therefore, understand why writers, journalists and corporate advisers are nervous about relinquishing control and compliance of market-sensitive content to emerging AI technologies. That said, it would be difficult to understand public opinion, a journalist’s beat or what stories are likely to get picked up without staying up to date with the publications now blocking GPTBot. Rely wholly on ChatGPT and you will miss large amounts of newly reported data and information.
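For the curious, the blocking Justin describes is usually done in a site’s robots.txt file: OpenAI publishes GPTBot’s user-agent token, so a publisher opting out adds rules along these lines (a minimal illustrative fragment, not any specific publisher’s actual file):

```text
# robots.txt — opt the whole site out of OpenAI's GPTBot crawler
User-agent: GPTBot
Disallow: /
```

Like all robots.txt rules, this relies on the crawler choosing to honour it, which OpenAI says GPTBot does.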

For generating ideas and making sense of lots of information

“ChatGPT is indeed a great tool for ideation and summarising information. Its ability to generate relevant and insightful responses can be incredibly helpful in brainstorming sessions or when trying to condense a large amount of information into a concise summary. However, it is important to note that while ChatGPT can provide a starting point and offer valuable suggestions, it still requires additional work to filter the most valuable and relevant gems from the generated output. So sorry, you’ve still got to use those brain cells!”

Account Manager Tanya’s takeaway: Treat ChatGPT as a virtual assistant that can help you with the groundwork, but don’t expect it to come up with an extraordinary result on its own. To get the most benefit, be prepared to invest additional effort in refining and rethinking the information it produces.

For writing content

“Writers, please don’t write like ChatGPT.

“Yes, it’s useful for prompts. Yes, it’s handy for getting some ideas together quickly. Sure, it’s useful for building a list of things you might include in an article or blog post. But, my word, is the language clunky. I find the output is written in the kind of corporate speak that people used to think was impressive. By all means use it to spark an idea or to help you get an outline – but remember to write in your own authentic voice or your client’s, of course.

“Flipping that warning on its head, though, it’s useful when you have a piece of impenetrable text – which you quite often find in regulatory or technical documents. Pop it into ChatGPT and ask it to turn it into something more accessible. Saves minutes or even hours of head scratching.”

Head of content Sarah’s takeaway: If you’re a writer, write. Don’t let ChatGPT (other generative AI tools are available) take away the joy of putting together a great argument or forming a lovely sentence. It’s a tool. It doesn’t have your imagination. Ask yourself why you became a writer in the first place. Not all output is created equal. Don’t use a tool to replace creativity, wordplay and voice.

Be aware of ChatGPT’s limitations and use it with caution

If you take less of a ‘technology vs human’ approach to ChatGPT, and more of an ‘information layered with common sense and expertise’ attitude, then it can be a valuable tool. Those are our team’s thoughts, but I thought it would be fun to ask ChatGPT too. Here’s what it had to say: