Enable Robots.txt for a Blogger Blog

Robots.txt is a plain text (not HTML) file you put on your site to tell search robots which pages you would like them not to visit. It is by no means mandatory, but well-behaved search engines generally obey what they are asked not to do. Every Blogger blog can serve a robots.txt file, which tells a web crawler which pages it is allowed to crawl and which it is not. This helps you remove duplicate content from search results, and you can use it to keep search engines from indexing unwanted pages.

Understanding Robots.txt

A simple robots.txt file consists of the following elements:

  • User Agent: the search engine crawler (such as Googlebot) that the rules in the group apply to
  • Allow: paths crawlers may visit; these are the pages you would like to get indexed
  • Disallow: paths crawlers should not crawl, which keeps those pages out of search results
  • Sitemap: the URL of the website's sitemap
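You can see how a crawler combines these elements with Python's standard urllib.robotparser module. The file below is a made-up illustration (the /private path and example.com URLs are hypothetical), not the Blogger file:

```python
import urllib.robotparser

# A sample robots.txt illustrating the elements above.
robots_txt = """\
User-agent: *
Disallow: /private
Allow: /
Sitemap: http://example.com/sitemap.xml
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# Pages under the disallowed path are blocked for every crawler...
print(parser.can_fetch("Googlebot", "http://example.com/private/page.html"))  # False
# ...while everything else stays crawlable.
print(parser.can_fetch("Googlebot", "http://example.com/about.html"))         # True
```

Rules are matched against the URL path, so Disallow: /private blocks /private itself and everything beneath it.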
Steps for enabling Robots.txt for a Blogger Blog

Editing Robots.txt for a Blogger Blog


  • Go to Blogger Dashboard > Your Blog > Settings > Search Preferences
  • Under Crawlers and indexing, enable Custom robots.txt content
  • After enabling custom robots.txt content, a blank field will appear
  • Enter the following code in the blank field

User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /
Sitemap: http://YOUR_BLOG_URL.blogspot.com/feeds/posts/default?orderby=UPDATED


  • Replace “YOUR_BLOG_URL” with your blog's actual address and click Save Changes

In this way you can enable robots.txt for your Blogger blog. The code above allows Google's AdSense crawler (Mediapartners-Google) to crawl everything, while all other crawlers may crawl everything except the search and label result pages under /search.
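You can sanity-check what this file does with the same urllib.robotparser module, without waiting for a crawler to visit. This is a local sketch: example.blogspot.com stands in for your blog's URL, and the Sitemap line is omitted because it does not affect crawl permissions:

```python
import urllib.robotparser

# The crawl rules from the Blogger robots.txt above.
blogger_rules = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(blogger_rules.splitlines())

# The AdSense crawler (Mediapartners-Google) may fetch everything,
# including label/search result pages (an empty Disallow allows all).
print(parser.can_fetch("Mediapartners-Google",
                       "http://example.blogspot.com/search/label/News"))      # True
# Other crawlers are kept out of /search...
print(parser.can_fetch("Googlebot",
                       "http://example.blogspot.com/search/label/News"))      # False
# ...but can still crawl ordinary posts and pages.
print(parser.can_fetch("Googlebot",
                       "http://example.blogspot.com/2013/05/my-post.html"))   # True
```

This matches the behaviour described above: duplicate label pages stay out of the index while posts remain crawlable.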