I recently had to fix a colossal screw-up by some web developers.
They mucked up and caused traffic to 3 massive sites to drop.
Actually, “drop” is putting it lightly.
It took a dive off the cliff and went on a journey to the centre of the earth. But not before being beaten with a club.
And it wasn’t immediately obvious why (no recent changes, nothing seasonal, nothing C-19 related…)
After a quick website crawl I found the problem: they added tags to the code that they weren’t supposed to.
In fact, these tags never came up in any discussions when the sites were being developed.
So how this managed to happen is beyond me. (It’s not like this was a miscommunication!)
Those tags that caused issues?
1. Canonical tag
2. Meta noindex tag
For the uninitiated, let’s be clear: these tags are not bad.
They serve a purpose and they’re perfectly fine.
But in this case, they were used incorrectly and sent traffic to the crypt.
Canonical tags are used to indicate the preferred version of a page to search engines. Very handy when you’ve got lots of parameter-filled URLs (due to filtering, for example).
They’re also used as part of an international SEO strategy, where the canonical tag is self-referencing (pointing to the URL it exists on).
But what happened in this case: the devs set the homepage as the canonical URL. ON EVERY SINGLE PAGE.
This was a signal to search engines that the 1000+ URLs on the site were “meh” and that the homepage was the preferred version.
You can see how this would murder organic traffic.
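If you want to sanity-check canonicals yourself, a few lines of Python will do it. A rough sketch below (the URL is made up, and it assumes the requests and beautifulsoup4 packages are installed):

```python
# Rough sketch: does a page's canonical tag point back at itself?
# Assumes the `requests` and `beautifulsoup4` packages are installed.
import requests
from bs4 import BeautifulSoup

def canonical_of(url):
    """Return the href of the page's canonical tag, or None if there isn't one."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("link", attrs={"rel": "canonical"})
    return tag.get("href") if tag else None

url = "https://www.example.com/some/category/page/"  # hypothetical URL
canonical = canonical_of(url)
if canonical and canonical.rstrip("/") != url.rstrip("/"):
    print(f"Heads up: {url} declares {canonical} as its canonical")
```

Run that over a handful of deep URLs and a site-wide “canonical = homepage” problem shows up in seconds.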
As for the meta noindex tag?
Good Lord.
This was like making the sites die a double death.
This tag is used when you wish to get an indexed URL de-indexed.
No guarantees, of course, but that’s what it’s supposed to do.
What makes this even more wild is that the homepages also had noindex tags!
So you’re telling search engines to ignore all your pages because the homepage is the preferred version. But then you’re also telling them to de-index the homepage.
THE MIND.
IT BOGGLES.
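For what it’s worth, this one is just as easy to check from the outside. A minimal sketch, same assumptions as before (requests + BeautifulSoup, made-up URL):

```python
# Rough sketch: does a page carry a meta robots noindex tag?
# Assumes the `requests` and `beautifulsoup4` packages are installed.
import requests
from bs4 import BeautifulSoup

def is_noindexed(url):
    """True if the page's meta robots tag contains 'noindex'."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    robots = soup.find("meta", attrs={"name": "robots"})
    return bool(robots) and "noindex" in robots.get("content", "").lower()

if is_noindexed("https://www.example.com/"):  # hypothetical homepage
    print("The homepage is asking to be removed from the index. Not ideal.")
```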
How do you fix this?
Easy. Remove the offending tags and request a visit from everyone’s friend: Googlebot (Search Console’s URL Inspection tool and a sitemap resubmission both help here).
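Before you ask for that recrawl, it’s worth sweeping the whole sitemap to confirm the tags really are gone. Another rough sketch, assuming the sitemap lives at /sitemap.xml and isn’t a sitemap index (same made-up domain, same packages):

```python
# Rough sketch: sweep every URL in the sitemap and confirm the offending
# canonical/noindex tags are gone before requesting a recrawl.
import requests
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup

SITEMAP = "https://www.example.com/sitemap.xml"  # hypothetical sitemap URL
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(sitemap_url):
    """Yield every <loc> listed in a plain (non-index) sitemap."""
    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
    for loc in root.findall(".//sm:loc", NS):
        yield loc.text.strip()

for url in sitemap_urls(SITEMAP):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    canonical = soup.find("link", attrs={"rel": "canonical"})
    robots = soup.find("meta", attrs={"name": "robots"})
    if canonical and canonical.get("href", "").rstrip("/") != url.rstrip("/"):
        print(f"{url} still points its canonical at {canonical['href']}")
    if robots and "noindex" in robots.get("content", "").lower():
        print(f"{url} still carries a noindex tag")
```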
How do you prevent this from happening?
Dish out a mindless flogging to the developers, then invest in a tool like Little Warden.
It scans your site and reports back on any changes to things like robots.txt, tags, SSL certs and so much more.
You can ignore the alerts for changes that are legit.
But sometimes things change when they shouldn’t – and that’s where these alerts are super useful.
I mean, you should really be investing in quality control before any release. But humans are flawed, and this is why tools like Little Warden exist.
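To give a rough sense of what that kind of monitoring does under the hood, here’s a toy version that just hashes robots.txt and shouts when it changes between runs (purely illustrative, not a substitute for a proper tool; the URL and state file name are made up):

```python
# Toy sketch: hash robots.txt and flag a change since the last run.
# Assumes `requests` is installed; the state file name is made up.
import hashlib
import pathlib
import requests

ROBOTS_URL = "https://www.example.com/robots.txt"  # hypothetical URL
STATE_FILE = pathlib.Path("robots_txt.sha256")

current = hashlib.sha256(requests.get(ROBOTS_URL, timeout=10).content).hexdigest()
previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None

if previous and previous != current:
    print("robots.txt changed since the last check. Was that supposed to happen?")
STATE_FILE.write_text(current)
```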
This isn’t a sponsored post. I get nothing from Little Warden. In fact, I got some shade on Twitter for even asking about alternatives. (https://twitter.com/jaaved/status/1296394290265051138)
My point here is this:
– when traffic drops, you should investigate straight away.
– a process of elimination is a great way to get to the root cause quicker.
– make notes in Google Analytics for major events, releases and even things like holidays – it helps identify trends quicker.
– beat your devs around the head when they muck up (then ply them with whatever makes them happy).
– just kidding, don’t beat anyone!
Got a traffic drop that needs CPR? Hit me up! Always happy to help!