ChatGPT has gained a lot of attention recently and the SEO community isn’t immune. Search marketers are putting the tool through its paces to find out how helpful – or harmful – it may be as an SEO tool.
Our team has tested ChatGPT over the past few months, too. We’ve used it to see how it fares at generating content, coming up with content ideas, writing ad headlines and a variety of other search marketing tasks. By now, we have a good understanding of what the current version of ChatGPT is capable of. This article explains the good and the bad of the AI system as a search marketing tool.
ChatGPT is an AI conversational platform designed to generate human-like responses to user prompts. Prompts can include questions like “who was the UK’s first Prime Minister?” or requests for the system to write a poem, translate a sentence or perform a variety of other tasks.
As an example, let’s see what it comes up with when we ask the question: “what is ChatGPT?”
“I am ChatGPT, a large language model developed by OpenAI. I am designed to understand natural language and generate responses to questions and prompts in a way that resembles human conversation. I was trained on a large corpus of text data, which allows me to draw upon a wide range of knowledge and information to answer questions and provide helpful responses. My purpose is to assist users with their inquiries and provide information and insights on a variety of topics.”
In OpenAI’s own words, ChatGPT was publicly released “to get users’ feedback and learn about its strengths and weaknesses.”
So ChatGPT was never launched as an SEO tool or intended to be a reliable source of information – and it’s important to remember this. Functionally, it’s a capable tool for a wide variety of tasks but we have to understand its limitations.
Let’s start by discussing what ChatGPT is good at.
ChatGPT is very good at what it’s designed for: conversational experiences. The most important components of this are its natural language processing (NLP) system and its response generation model, powered by vast amounts of information in its knowledge base.
We’ve tested ChatGPT extensively in recent months and its strengths are pretty clear by this point:
If you’ve never used a tool like ChatGPT before, it’s difficult not to be impressed. Its natural language processing system is as good as any at understanding user prompts. Its question-answering model also performs very well on relevance and grammatical accuracy, and handles basic tasks like definitions or suggesting ideas.
That being said, it isn’t actually thinking up any ideas at all – it’s recombining patterns from the text it was trained on into an auto-generated response.
If you’ve tested any AI-content tools in the past, ChatGPT is a step up from most of the popular names. ChatGPT’s content is more fluid and natural while the system does a better (but not particularly good) job of constructing logical sentences that make a valid point.
All this aside, ChatGPT still suffers from the same weaknesses as any other AI generative tool.
At a glance, it’s easy to see why ChatGPT’s output generates so much excitement. However, as soon as you analyse the information in its responses and understand where this comes from, you’ll notice issues arising.
This list of weaknesses raises obvious issues from an SEO perspective, especially if you expect to use ChatGPT as a content-generation tool. When accuracy, recency, expertise, originality and almost every other factor listed above are crucial for quality content, the outlook is poor from the beginning.
ChatGPT still has its uses for search marketers, though, and we’ll discuss these later.
To see what ChatGPT’s strengths and weaknesses mean in practical terms for search marketers, let’s look at some examples of what happened when we tested it for different SEO tasks.
We’ve tested ChatGPT for every task we could think of – everything from writing content to generating code. For SEO specifically, we feel these three tasks give the fairest representation of what ChatGPT can and can’t do:
Generating content ideas is probably ChatGPT’s biggest strength as a search marketing tool. For example, we prompted the tool to come up with ideas for one customer on the topic of refinancing in the hospitality industry.
One simple prompt generated the following response with 10 ideas:
Note that suggestion #3 refers to low interest rates, which is a good practical example of how ChatGPT’s information isn’t up to date. Even so, we’re only looking for content ideas from this interaction, and all we have to do is change the topic to high interest rates.
In this particular case, ChatGPT was genuinely useful, and it’s good at generating title ideas. You can also use it to suggest subheadings for individual pieces or to surface new keyword ideas.
Generating content ideas is one thing but writing a full piece of content is something else entirely. You can test this with a variety of prompts, such as “write a 500-word blog post on the top SEO trends for 2023”.
For this, ChatGPT will generate roughly 500 words of content on the topic you specify. In terms of relevance, the content is generally spot on, and it rarely stumbles over grammar errors or spelling mistakes.
However, as we touched on earlier, it runs into several key problems:
We asked ChatGPT to write a blog post on SEO trends for 2023 so, before we look at snippets from the content itself, let’s see which trends it decided to write about:
These selections aren’t too bad and you get a sense of how ChatGPT can suggest ideas for subheadings in your content. The only problem is that several of these trends are already a little dated in 2023 and they’re all very generic. There’s nothing here that stands out from the thousands of other articles published on the same topic.
If we take a closer look at the content ChatGPT generates, bigger issues become obvious.
We ran the same prompt through ChatGPT multiple times and it returned almost identical suggestions. So what happens if thousands of agencies turn to ChatGPT to write an article on SEO trends for 2024 at the end of the year? Is everyone going to end up with the same content?
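If you want to check how similar two generated drafts actually are, a simple word-shingle overlap measure gives a rough answer. This is a minimal sketch of our own (not a tool referenced above); near-identical outputs score close to 1.0 while unrelated text scores near 0:

```python
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping n-word shingles (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of two texts' shingle sets: 1.0 = identical wording."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

# Two hypothetical AI drafts that differ by only a couple of words.
draft_1 = "Voice search will continue to grow as an SEO trend in 2023"
draft_2 = "Voice search will continue to grow as a key SEO trend in 2023"
score = jaccard(draft_1, draft_2)
```

A check like this won’t prove plagiarism, but a high score across drafts from the same prompt is a quick warning sign that your “unique” article may look like everyone else’s.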
None of the content ChatGPT generates offered anything of depth or real value. Take a look at what it has to say about local search as an example:
Suggesting local search is an SEO trend in 2023 (maybe in 2014) is an underwhelming start. In fact, the whole passage reads like something written a decade ago, mentioning the rise of voice search and mobile devices as if these are also new things. It’s all very generic and lacks any depth or value.
We’ve also tested ChatGPT for writing ad headlines for several of our PPC customers.
It really struggled to adhere to the constraints of Google Ads headlines – for example, avoiding exclamation marks and staying within the 30-character limit. When we tried to correct ChatGPT with follow-up prompts, it actually wrote longer headlines and included more exclamation marks.
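Constraints like these are easier to enforce with a post-processing step than with follow-up prompts. As a minimal sketch (the function name and candidate headlines are our own, and only the two rules mentioned above are checked – real Google Ads policy covers much more):

```python
def check_headline(headline: str, max_length: int = 30) -> list[str]:
    """Return a list of problems for a Google Ads headline candidate.

    Covers only the two constraints discussed above: the character
    limit and the ban on exclamation marks.
    """
    problems = []
    if len(headline) > max_length:
        problems.append(f"too long: {len(headline)} chars (max {max_length})")
    if "!" in headline:
        problems.append("contains an exclamation mark")
    return problems

# Filter AI-generated candidates down to the compliant ones.
candidates = [
    "Refinance Your Hotel Today!",                       # exclamation mark
    "Refinance Your Hotel Loan",                         # fine
    "Compare Hospitality Refinancing Rates Instantly",   # over 30 characters
]
valid = [h for h in candidates if not check_headline(h)]
```

Validating generated copy in code like this is far more reliable than asking the model to correct itself.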
It’s worth noting that ChatGPT’s algorithm is biased towards longer responses. OpenAI acknowledges this as one of its limitations: “These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues”.
The quality of ChatGPT’s copywriting is pretty poor as well, but this shouldn’t surprise anyone. Copywriting is a highly specialised niche. ChatGPT is engineered to follow the conventions of natural language, but advertising copy breaks many of the rules it was trained on.
The fastest measurement of ChatGPT as an SEO tool is to compare the content it generates against Google’s quality rater guidelines.
These guidelines are used by Google’s human team of quality raters who are tasked with assessing the quality of its search results. They manually analyse web pages and grade them on an extensive range of factors grouped into four key characteristics:
For a more in-depth explanation, take a look at our analysis of Google’s latest quality rater guidelines (E-E-A-T).
Interestingly, when we asked ChatGPT to write a blog post on key search trends for 2023, it included E-A-T (the older version) and insisted how important they are.
The system’s information may be out of date but it’s right about the importance of E-E-A-T, as highlighted by the recent update to the guidelines. It’s no coincidence that Google is adding more detail to its guidelines at a time when AI-content tools are going mainstream.
As a result, E-E-A-T will only become more important as Google works to verify that content genuinely comes from experts.
So let’s see how ChatGPT stacks up against Google’s four most important characteristics of content quality:
Google has been in the AI game for a long time and it knew exactly what kind of AI content tools were coming to market before most of us. It knows the strengths and weaknesses of this technology inside out and it also knows millions of publishers would use it without any consideration of the risks.
AI content tools are here to stay but Google’s quality demands will only increase as they become more capable.
Yes, ChatGPT will only improve over time – as with all similar AI systems. As a point of reference, the next version (ChatGPT-4) is expected to replace the current version soon. Early reports claimed this would expand the system’s parameter count from 175 billion to as much as 100 trillion, although OpenAI has not confirmed those figures.
If accurate, that would put ChatGPT-4 in the region of lower estimates for the number of synaptic connections in the human brain.
Does this mean ChatGPT-4 will match the thinking power of human beings? Well, no. In fact, OpenAI CEO Sam Altman has said the hype surrounding ChatGPT is setting people up for disappointment with the next version.
You only have to look at the disappointing starts for Google’s Bard and Bing’s new AI search engine to get a sense of the reception ChatGPT-4 could receive. In reality, Google and Microsoft have equally capable AI systems but they couldn’t live up to the ChatGPT hype.
All this aside, ChatGPT and its rivals will all improve over time and the big question is where does OpenAI want to go with this technology? As things stand, ChatGPT is an experimental conversational platform that’s already capable of much more.
Let’s say OpenAI wants to develop a content generation tool, similar to its AI image generator, DALL·E 2. Like ChatGPT’s responses, the images DALL·E 2 generates are built from patterns learned from existing images in its training data, so it faces similar questions about originality.
However, at what point do tools like ChatGPT and DALL·E 2 have enough neural parameters and use enough data to produce content that’s similarly original to human beings? After all, the question of whether human beings can ever come up with anything truly original is highly debated.
So how original is original enough?
Let’s wrap things up with a quick summary of what ChatGPT is good and bad at as an SEO tool.
Good at:
Bad at:
If you want to learn more about the pros and cons of ChatGPT as an SEO tool, our team is ready to offer advice. Call us on 02392 830 281 to speak to our search marketing team or send us your details for more information.
While working as a consultant for the 85 Broads Network in New York City in 2004 (now Ellevate), Ben was asked to ‘look into’ SEO for the company website. Ben then formed his own company, Pebble SEO, to offer SEO and adjunct digital services to the US market, expanding into PPC, design and content writing services. Arriving back in the UK in 2010, Ben joined a small company on the Isle of Wight as a business development manager, getting closer to the web development and design side of things, before partnering with Vectis Holdings as the MD of we3create, a full-service digital agency, serving the Isle of Wight. Later, as the owner of Digital IOW and partner in Isle of Wight SEO, he used a broad range of skills, from web and graphic design to video production, and, of course, SEO. Ben is now Head of SEO for small and medium businesses at Vertical Leap. He enjoys cycling, photography, making sourdough and short films.