Crawl directives archives

Recent Crawl directives articles

WordPress robots.txt: Best-practice example for SEO

7 November 2019 | Jono Alderson

Your robots.txt file is a powerful tool when working on a website’s SEO – but you should handle it with care. It allows you to deny search engines access to different files and folders, but often that’s not the best way to optimize your site. Here, we’ll explain how we think site owners should use their …
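
The full article covers the nuance; as a minimal sketch of the blocking mechanism the excerpt describes, Python's standard urllib.robotparser can show which URLs a given Disallow rule would deny. The rules and example.com URLs below are made up for illustration, not a recommended configuration:

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt rules, purely to illustrate how a
    # Disallow directive denies crawlers access to a folder.
    rules = [
        "User-agent: *",
        "Disallow: /wp-admin/",
        "Disallow: /private/",
    ]

    parser = RobotFileParser()
    parser.parse(rules)  # parse the rules directly, no network fetch needed

    # Paths under a disallowed folder are blocked for all user agents ...
    print(parser.can_fetch("*", "https://example.com/wp-admin/options.php"))  # False
    # ... while everything else stays crawlable.
    print(parser.can_fetch("*", "https://example.com/blog/some-post/"))       # True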

Read: "WordPress robots.txt: Best-practice example for SEO"
WordPress robots.txt: Best-practice example for SEO





Closing a spider trap: fix crawl inefficiencies

12 October 2017 | 4 Comments | Joost de Valk

Quite some time ago, we made a few changes to how yoast.com is run as a shop and how it’s hosted. In that process, we accidentally removed our robots.txt file and caused a so-called spider trap to open. In this post, I’ll show you what a spider trap is, why it’s problematic and how you …

Read: "Closing a spider trap: fix crawl inefficiencies"
Closing a spider trap: fix crawl inefficiencies


Google Panda 4, and blocking your CSS & JS

A month ago, Google introduced its Panda 4.0 update. Over the last few weeks, we’ve been able to “fix” a couple of sites that were hit by it. These sites both lost more than 50% of their search traffic in that update. When they recovered, their previous positions in the search results came back. Sounds too good to be …

Read: "Google Panda 4, and blocking your CSS & JS"
Google Panda 4, and blocking your CSS & JS