Crawl directives archives

Recent Crawl directives articles



Playing with the X-Robots-Tag HTTP header

3 January 2017 | 2 Comments | Joost de Valk

Traditionally, you use a robots.txt file on your server to manage which pages, folders, subdomains or other content search engines are allowed to crawl. But did you know that there’s also such a thing as the X-Robots-Tag HTTP header? Here, we’ll discuss what the possibilities are and how this might be a better …

Read: "Playing with the X-Robots-Tag HTTP header"
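A quick sketch of the idea from the teaser above: unlike a meta robots tag, the X-Robots-Tag is an HTTP response header, so it can apply crawl directives to any response type (PDFs, images, feeds), not just HTML. The minimal WSGI app below is illustrative only (the app and handler names are made up for this example), showing a server attaching the header to a non-HTML response:

```python
# Minimal, hypothetical WSGI app showing an X-Robots-Tag response header.
# The app name and response body are illustrative, not from the article.
def app(environ, start_response):
    headers = [
        ("Content-Type", "application/pdf"),
        # Crawl directive for any bot fetching this URL, even though
        # a PDF cannot carry a <meta name="robots"> tag:
        ("X-Robots-Tag", "noindex, nofollow"),
    ]
    start_response("200 OK", headers)
    return [b"%PDF-1.4 ..."]

# Call the app directly with a stub start_response to inspect the headers.
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

body = b"".join(app({}, start_response))
print(captured["headers"]["X-Robots-Tag"])
```

In production you would normally set this header in your web server configuration (e.g. Apache or nginx) rather than in application code.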


Google Panda 4, and blocking your CSS & JS

19 June 2014 | 79 Comments | Joost de Valk

A month ago Google introduced its Panda 4.0 update. Over the last few weeks we’ve been able to “fix” a couple of sites that were hit by it. These sites both lost more than 50% of their search traffic in that update. When they recovered, their previous positions in the search results came back. Sounds too good to be …

Read: "Google Panda 4, and blocking your CSS & JS"
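The fix the teaser above alludes to is letting Google crawl your CSS and JavaScript so it can render pages the way users see them. A hedged illustration of what the relevant robots.txt lines might look like (the paths are hypothetical, not from the article):

```
User-agent: Googlebot
# Make sure rendering resources are crawlable; blocking them can
# prevent Google from rendering the page as users see it.
Allow: /*.css$
Allow: /*.js$
```

The `$` anchors the pattern to the end of the URL, so these rules match only files ending in .css or .js.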