Crawl directives

There are multiple ways to tell search engines how to behave on your site. These are called “crawl directives”. They allow you to:

  • tell a search engine not to crawl a page at all;
  • tell it not to keep a page in its index after it has crawled it;
  • tell it whether or not to follow the links on a page;
  • give a range of “minor” directives.
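As a minimal illustration, the page-level directives above are commonly expressed with a robots meta tag in a page’s `<head>` (the exact directives you combine depend on your goals):

```html
<!-- Ask search engines not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```

A page can also allow indexing while discouraging link following (`content="index, nofollow"`), or vice versa.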

We write a lot about these crawl directives because they are an important weapon in an SEO’s arsenal, and we keep these articles up to date as standards and best practices evolve.


Beginner level

SEO basics: What is crawlability? »

What is crawlability, and why is it important for SEO? And in what ways could you block Google from crawling (parts of your) site? Read on!

Expert level

The ultimate guide to robots.txt »

The robots.txt file is a file you can use to tell search engines where they can and cannot go on your site. Learn how to use it to your advantage!
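As a brief sketch, a robots.txt file lives at the root of your domain and uses simple `User-agent` and `Disallow`/`Allow` rules (the paths below are hypothetical examples):

```text
# Apply to all crawlers
User-agent: *
# Keep crawlers out of an example admin area
Disallow: /admin/
# But allow one specific path inside it
Allow: /admin/public/
```

Note that robots.txt controls crawling, not indexing: a disallowed page can still appear in search results if other pages link to it.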


Recent Crawl directives articles

Crawl budget optimization can help you out if Google doesn't crawl enough pages on your site. Learn whether and how you should do this.

What do you know about bot traffic? Do you know that it affects the environment too? Read on to learn why you should care about bot traffic!

Browse through our Crawl directives content posts. »