txt file is then parsed and instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish to have crawled. Pages typically prevented from being crawled include login-specific pages