It can happen to anyone: you’re working on your site, fiddling with some posts here and there, and hit Update when you’re done. After a while, you check back on how a post is doing and, to your dismay, it has disappeared completely from the search engines! It turns out you’ve accidentally set a post or … Read: "Help, I’ve accidentally noindexed a post. What to do?"
The robots.txt file is a file you can use to tell search engines where they can and cannot go on your site. Learn how to use it to your advantage!
Must-read articles about Crawl directives
Trying to prevent indexing of your site by using robots.txt is a no-go; use the X-Robots-Tag header or a meta robots tag instead! Here's why.
The canonical URL allows you to tell search engines that certain similar URLs are actually one and the same. Learn how to use rel=canonical!
Want to keep a page out of the search results? Ask yourself whether it should be on your site anyway. If it should, use a robots meta tag to prevent it from being indexed.
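The advice above — preferring a robots meta tag or its HTTP-header twin, X-Robots-Tag, over robots.txt for keeping pages out of the index — can be sketched in code. The snippet below is a minimal, hypothetical illustration (the `/search` route and `build_headers` helper are made up, not taken from any of the linked posts) of attaching a noindex directive at the HTTP level, using only Python's standard library:

```python
# Hypothetical sketch: adding an X-Robots-Tag response header so that
# certain pages stay out of the search results while remaining crawlable.
from wsgiref.headers import Headers

def build_headers(path):
    """Return response headers for a page, adding a noindex directive
    for pages that should not appear in search results."""
    headers = Headers([("Content-Type", "text/html; charset=utf-8")])
    # Internal search result pages are useful to visitors but should
    # not rank, so we tell crawlers not to index them:
    if path.startswith("/search"):
        headers["X-Robots-Tag"] = "noindex, follow"
    return headers
```

The in-page equivalent is a `<meta name="robots" content="noindex, follow">` tag in the page's `<head>`; both tell search engines they may crawl the page and follow its links, but should keep the page itself out of their index.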
Recent Crawl directives articles
Your robots.txt file is a powerful tool when you’re working on a website’s SEO – but it should be handled with care. It allows you to deny search engines access to different files and folders, but often that’s not the best way to optimize your site. Here, we’ll explain how we think webmasters should use their … Read: "WordPress robots.txt: Best-practice example for SEO"
Sometimes Google makes announcements about new features and we go “huh, why did they do that?” This week we had one of those. Google introduced a new set of robots meta controls that allows sites to limit the display of their snippets in the search results. There is a reason for that, but they buried … Read: "Robots meta changes for Google"
An SEO Basics post about technical SEO might seem like a contradiction in terms. Nevertheless, some basic knowledge about the more technical side of SEO can mean the difference between a high-ranking site and a site that doesn’t rank at all. Technical SEO isn’t easy, but here we’ll explain – in layman’s language – … Read: "What’s technical SEO? 8 technical aspects everyone should know"
The robots.txt file is a file you can use to tell search engines where they can and cannot go on your site. Learn how to use it to your advantage! Read: "The ultimate guide to robots.txt"
If you want to keep your page out of the search results, there are a number of things you can do. Most options aren’t hard and you can implement these without a ton of technical knowledge. If you can check a box, your content management system will probably have an option for that. Or allows … Read: "How to keep your page out of the search results"
This guide discusses what hreflang is and what it’s for, and gives in-depth information on how to implement it for your multilingual websites. Read: "hreflang: the ultimate guide"
Some of the pages of your site serve a purpose, but that purpose isn’t ranking in search engines or even getting traffic to your site. These pages need to be there, as glue for other pages or simply because regulations require them to be accessible on your website. If you regularly read our blog, you’ll know … Read: "Why noindex a page or nofollow a link?"
Why should you block your internal search result pages from Google? Well, how would you feel if you were in dire need of an answer to your search query and ended up on the internal search pages of a certain website? That’s one crappy experience. Google thinks so too, and prefers you not to have these internal … Read: "Block your site’s search result pages"
With the meta robots tag you can control what search engine spiders do on your site. Learn what it does right here! Read: "The ultimate guide to the meta robots tag"
There are multiple ways to tell search engines how to behave on your site. These are called “crawl directives”. They allow you to:
- tell a search engine not to crawl a page at all;
- tell it not to keep a page in its index after it has crawled it;
- tell it whether or not to follow the links on a page;
- give it a lot of “minor” directives.
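The first directive in the list – keeping a crawler away from a page entirely – is what robots.txt is for. As a minimal sketch of how a crawler interprets those rules, here is Python's standard-library robots.txt parser applied to a small, made-up example.com rule set:

```python
# Sketch of how a crawler reads robots.txt crawl directives,
# using Python's built-in parser. The rules below are hypothetical.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The admin area is off-limits to crawlers...
print(parser.can_fetch("*", "https://example.com/wp-admin/"))  # False
# ...but the explicitly allowed file inside it may be fetched:
print(parser.can_fetch("*", "https://example.com/wp-admin/admin-ajax.php"))  # True
# Everything else on the site stays crawlable by default:
print(parser.can_fetch("*", "https://example.com/blog/some-post/"))  # True
```

Note that robots.txt only controls crawling, not indexing – which is exactly why the posts above recommend a meta robots tag or X-Robots-Tag header when you want a page kept out of the search results.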
We write a lot about these crawl directives as they are a very important weapon in an SEO’s arsenal. We try to keep these articles up to date as standards and best practices evolve.