
Do You See What Google Fetches?
We’ve had two issues this month where our customers’ sites were working perfectly for visitors but Google Search Console was reporting errors. In one case, the client was writing some of the page content with JavaScript. In the other case, we found that the hosting provider another client was using was redirecting visitors correctly… but not Googlebot. As a result, Webmasters kept reporting 404 errors instead of following the redirect we’d implemented.
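To illustrate the kind of behavior we ran into, here’s a minimal sketch of a server-side handler that redirects ordinary browsers but answers Googlebot with a 404. This is not the host’s actual configuration; the route, target URL, and user-agent check are hypothetical, and it only shows how a redirect can “work” in your browser while the crawler sees something completely different.

```python
from flask import Flask, abort, redirect, request

app = Flask(__name__)

# Hypothetical old path that should 301 to a new location for everyone.
@app.route("/old-page")
def old_page():
    user_agent = request.headers.get("User-Agent", "")

    # Broken behavior: the crawler is singled out and handed a 404,
    # while human visitors get the redirect and never notice a problem.
    if "Googlebot" in user_agent:
        abort(404)

    return redirect("https://example.com/new-page", code=301)


if __name__ == "__main__":
    app.run()
```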
Googlebot is Google’s web crawling bot (sometimes also called a “spider”). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. From Google: Googlebot
Google fetches, crawls, and captures your page content differently than a browser does. While Google can crawl scripted content, that doesn’t mean it will always be successful. And just because a redirect works when you test it in your browser doesn’t mean Googlebot is being redirected properly. It took some dialogue between our team and the hosting company before we figured out what they were doing… and key to finding out was using the Fetch as Google tool in Webmasters.
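One quick way to double-check alongside Fetch as Google is to request the page yourself while identifying as Googlebot and compare the responses. Here’s a minimal sketch using Python’s requests library, with a placeholder URL and a representative Googlebot user-agent string; spoofing the user agent only approximates what the real crawler receives, so treat Fetch as Google as the authority.

```python
import requests

# Placeholder: the path that Webmasters is reporting 404s for.
URL = "https://example.com/old-page"

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for name, ua in USER_AGENTS.items():
    # Follow redirects so we can inspect the full status-code chain each client gets.
    response = requests.get(URL, headers={"User-Agent": ua}, allow_redirects=True, timeout=10)
    chain = [r.status_code for r in response.history] + [response.status_code]
    print(f"{name:10s} -> {' -> '.join(str(code) for code in chain)}  final URL: {response.url}")
```

If the two chains differ (for example, 301 → 200 for the browser but a bare 404 for the Googlebot user agent), the server is treating the crawler differently, which is exactly what we saw with our second client.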
The Fetch as Google tool lets you enter a path within your site, see whether or not Google was able to crawl it, and view the crawled content exactly as Google does. For our first client, we were able to show that Google was not reading the script as they’d hoped. For our second client, we were able to use a different method to redirect Googlebot.
If you see Crawl Errors within Webmasters (in the Health section), use the Fetch as Google tool to test your redirects and view the content that Google is actually retrieving.