Google Does Not Remove URLs Excluded By Robots.txt Until URLs Are Individually Reprocessed


Google’s John Mueller posted a clarification on how and when Google processes the removal requests, or exclusion requests, you make in your robots.txt. The action is not taken when Google discovers the change in your robots.txt, but rather after the robots.txt is first processed and then the specific URLs that are impacted are individually reprocessed by Google Search.

Both must happen. First, Google needs to pick up on the changes in your robots.txt, and then Google needs to reprocess the individual URLs, on a URL-by-URL basis, for any changes in Google Search to happen. This can be fast or not, depending on how quickly the specific URLs are reprocessed by Google.
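For illustration, here is a minimal sketch of how a disallow rule is evaluated against individual URLs, using Python's standard urllib.robotparser. The robots.txt content, URLs, and user agent string are all hypothetical, and this is of course not how Googlebot itself is implemented; it just shows that a rule like "Disallow: /" is something a crawler applies per URL at fetch time:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks the whole site.
robots_txt = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Hypothetical URLs that would each be checked individually.
urls = [
    "https://example.com/",
    "https://example.com/page-1",
    "https://example.com/blog/post",
]

for url in urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'crawlable' if allowed else 'blocked'}")
```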

John Mueller posted this on Mastodon saying, “the thing to keep in mind is that it’s not about when we see the change in the robots.txt, it’s about when we would have wanted to reprocess the URL. If I disallow: / today, and Google sees it tomorrow, it doesn’t change all of the URLs into robotted tomorrow, it only starts doing that on a per-URL basis then. It’s like when you 404 a whole site, the whole site doesn’t drop out, but instead it happens on a per-url basis.”

Here is a screenshot of that conversation, just so you have the context:

Google Mastodon Conversation
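To make Mueller's per-URL point concrete, here is a hypothetical sketch of the timing. All dates, URLs, and the "next reprocessing" schedule are invented for illustration; the point is only that the disallow takes effect for each URL at that URL's own reprocessing time, not when the robots.txt change is first seen:

```python
import datetime

# Site owner adds "Disallow: /" and the crawler picks it up a day later.
robots_txt_changed = datetime.date(2024, 1, 1)
robots_txt_seen = datetime.date(2024, 1, 2)

# Each indexed URL has its own (hypothetical) next reprocessing date.
index = {
    "https://example.com/":         datetime.date(2024, 1, 3),
    "https://example.com/page-1":   datetime.date(2024, 1, 10),
    "https://example.com/old-post": datetime.date(2024, 3, 1),
}

for url, reprocess_date in index.items():
    # The new rule applies only once this specific URL is reprocessed
    # after the crawler has seen the updated robots.txt.
    effective = max(reprocess_date, robots_txt_seen)
    print(f"{url} treated as robotted starting {effective}")
```

Under this model, a rarely recrawled URL can keep showing up in search long after the site-wide disallow was published, which is exactly the behavior Mueller describes.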

In fact, many ranking algorithms work this way, which is why when an update rolls out, it sometimes takes about two weeks to fully roll out because that is how long most important URLs on the web take for Google to reprocess them. Some URLs on the web may take months to reprocess, as an FYI.

Forum discussion at Twitter.