Have you recently received a message from Google warning that they cannot read and index your website optimally?
Many website owners have noticed an increase in messages from Google Search Console stating that Googlebot cannot access CSS and JS files for a given URL. The message stems from Google’s recent push for greater transparency around search ranking factors.
Below is an example of what the message looks like:
The solution lies in your robots.txt file. But first, what is robots.txt?
The robots exclusion protocol (REP), or robots.txt, is a text file webmasters create to instruct robots (typically search engine crawlers) how to crawl and index pages on their website. To be recognized, robots.txt must be placed in the top-level directory of the web server, for example: http://www.example.com/robots.txt
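To see how crawlers interpret the file, you can test rules locally with Python's standard-library `urllib.robotparser`. The rules and URLs below are hypothetical examples, not taken from any real site:

```python
from urllib import robotparser

# A minimal, hypothetical robots.txt: all bots are barred from /private/.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot may fetch the homepage, but not anything under /private/.
print(rp.can_fetch("Googlebot", "http://www.example.com/index.html"))  # True
print(rp.can_fetch("Googlebot", "http://www.example.com/private/a"))   # False
```

This is the same matching logic the warning is about: if a Disallow rule covers your CSS and JS paths, `can_fetch` returns False for them and Googlebot skips them.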
To solve this problem:
1. For WordPress websites only (skip to step 2 if you don’t have a WordPress site). Your current robots.txt most likely begins with:
Remove the “/” at the end so it becomes:
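As an illustration (assuming the common WordPress default, which may differ from your file), `Disallow: /` blocks crawlers from the entire site, while an empty `Disallow:` allows everything:

```
# Before: blocks crawlers from the whole site
User-agent: *
Disallow: /

# After: an empty Disallow permits crawling
User-agent: *
Disallow:
```

That single trailing slash is the difference between "block everything" and "block nothing".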
2. Look through the robots.txt file for any of the following lines of code:
If you see any of those lines, remove them. They are what blocks Googlebot from crawling the files it needs to render your site the way visitors see it.
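For example, directives along these lines (common in older WordPress setups, shown here as hypothetical samples) are typical culprits, because themes and plugins serve their CSS and JS from these folders:

```
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /*.js$
Disallow: /*.css$
```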
3. Add this at the end of your robots.txt:
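One widely used pattern for this step (an illustrative sketch, not necessarily the exact snippet from the original post) explicitly allows Googlebot to fetch stylesheets and scripts regardless of other rules:

```
User-agent: Googlebot
Allow: /*.css
Allow: /*.js
```

Because this group names Googlebot specifically, Google applies it in preference to the generic `User-agent: *` rules above it.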
Have questions or need help? Please contact us.