We said it in 2009, and we’ll say it again: it keeps amazing us that there are still people using just a robots.txt file to prevent indexing of their site in Google or Bing. As a result, their site shows up in the search engines anyway. You know why it keeps amazing us? Because robots.txt …Read: "Preventing your site from being indexed, the right way"
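To sketch the distinction the post is about: a robots.txt `Disallow` only blocks crawling, so a blocked URL can still end up indexed (for example via external links), whereas a meta robots `noindex` tells search engines not to index the page at all — and only works if the page is crawlable. Both snippets below are illustrative examples, not taken from the post:

```text
# robots.txt — blocks crawling of /private/, but does NOT prevent indexing
User-agent: *
Disallow: /private/
```

```html
<!-- meta robots tag in the page's <head> — prevents indexing,
     provided the page itself is not blocked from crawling -->
<meta name="robots" content="noindex">
```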
Crawl directives archives - Page 3 of 3
Recent Crawl directives articles
Ranking in the search engines requires a website with flawless technical SEO. Luckily, the Yoast SEO plugin takes care of (almost) everything on your WordPress site. Still, if you really want to get the most out of your website and keep on outranking the competition, some basic knowledge of technical SEO is a must. In this post, …Read: "What is crawlability?"
If you have a big eCommerce site with lots of products, layered navigation can help your users to narrow down their search results. Layered or faceted navigation is an advanced way of filtering, by providing groups or filters for (many) product attributes. In this filtering process, you might create a lot of URLs though. The …Read: "Nofollow layered navigation links?"
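As an illustration of the technique the post's title refers to, a layered-navigation filter link might carry a `nofollow` attribute so that crawlers don't follow every filter combination (the URL and filter parameters here are made up):

```html
<!-- hypothetical faceted-navigation link: nofollow hints crawlers
     away from the near-infinite space of filter-combination URLs -->
<a href="/shoes?color=red&amp;size=42" rel="nofollow">Red, size 42</a>
```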
We used to consult for sites that monetize, in part, with affiliate links. We normally advised people to redirect affiliate links. In the past, we noticed that there wasn’t a proper script available online that could handle this for us, so we created one to tackle this problem. In this post, I explain how you can …Read: "How to cloak your affiliate links"
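A minimal sketch of the general redirect approach (not the actual script from the post): keep all affiliate links in a local folder, 302-redirect each one to its destination, and block that folder in robots.txt. The `/out/` path and destination URL are examples only:

```apache
# .htaccess sketch — 302-redirect a local pretty URL to the affiliate destination
Redirect 302 /out/widget https://example.com/affiliate-link
```

```text
# robots.txt — keep search engines out of the redirect folder
User-agent: *
Disallow: /out/
```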
Traditionally, you use a robots.txt file on your server to manage what pages, folders, subdomains or other content search engines will be allowed to crawl. But did you know that there’s also such a thing as the X-Robots-Tag HTTP header? Here, we’ll discuss what the possibilities are and how this might be a better …Read: "Playing with the X-Robots-Tag HTTP header"
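As a quick sketch of what the header looks like in practice: unlike a meta robots tag, the X-Robots-Tag can be applied to non-HTML files such as PDFs. An Apache configuration (requires mod_headers; the PDF rule is just an example) might look like this:

```apache
# Apache sketch: send a noindex, nofollow X-Robots-Tag for all PDF files
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```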
In 2015, Google Search Console started actively warning webmasters not to block CSS and JS files. In 2014, we told you the same thing: don’t block CSS and JS files. We feel the need to repeat this message now. In this post, we’ll explain why you shouldn’t block these specific files from Googlebot. Why you …Read: "Don’t block CSS and JS files"
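One common way this goes wrong is a blanket `Disallow` on a theme or assets folder that also catches stylesheets and scripts. A hedged sketch of a robots.txt that explicitly allows CSS and JS for Googlebot (the folder name is an example):

```text
# robots.txt sketch: even if a folder is disallowed,
# explicitly allow CSS and JS so Googlebot can render the page
User-agent: Googlebot
Disallow: /wp-content/themes/example-theme/
Allow: /*.css$
Allow: /*.js$
```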
A month ago, Google introduced its Panda 4.0 update. Over the last few weeks, we’ve been able to “fix” a couple of sites that were hit by it. Both sites lost more than 50% of their search traffic in that update; after the fix, their previous positions in the search results came back. Sounds too good to be …Read: "Google Panda 4, and blocking your CSS & JS"