Crawl directives archives

Playing with the X-Robots-Tag HTTP header

3 January 2017 by Joost de Valk - 2 Comments

Traditionally, you use a robots.txt file on your server to manage which pages, folders, subdomains or other content search engines are allowed to crawl. But did you know that there’s also such a thing as the X-Robots-Tag HTTP header? Here, we’ll discuss what the possibilities are and how this might be a better …

Read: "Playing with the X-Robots-Tag HTTP header"





WordPress robots.txt example for great SEO

26 April 2016 by Joost de Valk

The robots.txt file is a very powerful tool if you’re working on a site’s SEO, but at the same time it has to be used with care. It allows you to deny search engines access to certain files and folders, but that’s very often not what you want to do. Over the years, especially Google changed …

Read: "WordPress robots.txt example for great SEO"

Google Panda 4, and blocking your CSS & JS

19 June 2014 by Joost de Valk - 79 Comments

A month ago Google introduced its Panda 4.0 update. Over the last few weeks we’ve been able to “fix” a couple of sites that got hit by it. These sites both lost more than 50% of their search traffic in that update. When they recovered, their previous position in the search results came back. Sounds too good to be …

Read: "Google Panda 4, and blocking your CSS & JS"


