Other search engines, such as Bing, give similar advice in their webmaster guidelines: like Google, most advise webmasters not to block JS and CSS files.
QUOTE: “The web has moved from plain HTML – as an SEO you can embrace that. Learn from JS devs & share SEO knowledge with them. JS’s not going away.” John Mueller, Google 2017
QUOTE: “Use Chrome’s Inspect Element to check the page’s title and description meta tag, any robots meta tag, and other meta data. Also check that any structured data is available on the rendered page.” Google, 2017
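The DevTools check Google describes can also be scripted. This is a rough sketch that pulls the title and robots meta tag out of rendered HTML (for example, copied from Chrome's Elements panel); the function name is mine, and regex parsing is for illustration only – a real check would use the DOM or a proper parser:

```javascript
// Extract the <title> and robots meta tag from a rendered HTML string.
// Returns null for anything that is missing.
function extractSeoTags(html) {
  const title = (html.match(/<title>([^<]*)<\/title>/i) || [])[1] || null;
  const robots = (html.match(/<meta\s+name=["']robots["']\s+content=["']([^"']*)["']/i) || [])[1] || null;
  return { title, robots };
}

extractSeoTags('<title>Home</title><meta name="robots" content="noindex">');
// → { title: 'Home', robots: 'noindex' }
```

The key point from the quote stands either way: run the check against the *rendered* page, not the raw source, because JS may rewrite these tags.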
You can read my notes on optimising page titles and meta tags for Google SEO.
As long as Google can render the page properly, Google will follow any links presented to it, including those with nofollow on them.
QUOTE: “Avoid the AJAX-Crawling scheme on new sites. Consider migrating old sites that use this scheme soon. Remember to remove “meta fragment” tags when migrating. Don’t use a “meta fragment” tag if the “escaped fragment” URL doesn’t serve fully rendered content.” John Mueller, Google 2016
See my notes on optimising internal links for Google and how to get Google to index my website properly.
QUOTE: “Yes, a link is a link, regardless of how it comes to the page. It wouldn’t really work otherwise.” John Mueller, Google 2017
Google will follow links found in JS code on a page but, as of 2020, will not pass signals like PageRank through them.
TIP: Use “feature detection” & “progressive enhancement” techniques to make your site accessible to all users.
QUOTE: “Don’t cloak to Googlebot. Use “feature detection” & “progressive enhancement” techniques to make your content available to all users. Avoid redirecting to an “unsupported browser” page. Consider using a polyfill or other safe fallback where needed. The features Googlebot currently doesn’t support include Service Workers, the Fetch API, Promises, and requestAnimationFrame.” John Mueller, Google 2016
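The feature-detection technique Mueller recommends can be sketched in a few lines of JavaScript: test for the API before using it, and fall back when it is missing. Here `globalObj` stands in for the browser's `window`, and the XHR fallback is a hypothetical stub rather than a real polyfill:

```javascript
// Feature detection: return the native Fetch API if the client supports it,
// otherwise a fallback, so clients without fetch (like the era's Googlebot)
// still get working content rather than a broken page.
function getFetch(globalObj) {
  if (typeof globalObj.fetch === 'function') {
    return globalObj.fetch; // native Fetch API is available
  }
  // Stubbed fallback path -- a real site would use an XHR-based polyfill here:
  return function xhrFallback(url) {
    return Promise.reject(new Error('fetch unsupported, polyfill needed for ' + url));
  };
}
```

Progressive enhancement is the same idea applied to the whole page: serve working HTML first, then layer JS features on top only where the client supports them.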
How Does Google Handle Content Which Becomes Visible When Clicking A Button?
This is going to depend on IF the content is actually in the page’s HTML, or is pulled from another page by a JS function triggered by an action the user must perform.
QUOTE: “Take for example Wikipedia on your mobile phone they’ll have different sections and then if you click they expand those sections and there are good usability reasons for doing that so as long as you’re not trying to stuff something in in a hidden way that’s deceptive or trying to you know distort the rankings as long as you’re just doing that for users I think you’ll be in good shape.” Matt Cutts, Google 2013
As long as the text is available on that page when Google crawls it, Google can see the text and use it in relevance calculations. How Google treats this type of content can be different on mobile sites and desktop sites.
QUOTE: “I think we’ve been doing something similar for quite awhile now, where if we can recognize that the content is actually hidden, then we’ll just try to discount it in a little bit. So that we kind of see that it’s still there, but the user doesn’t see it. Therefore, it’s probably not something that’s critical for this page. So that includes, like, the Click to Expand. That includes the tab UIs, where you have all kinds of content hidden away in tabs, those kind of things. So if you want that content really indexed, I’d make sure it’s visible for the users when they go to that page. From our point of view, it’s always a tricky problem when we send a user to a page where we know this content is actually hidden. Because the user will see perhaps the content in the snippet, they’ll click through the page, and say, well, I don’t see where this information is on this page. I feel kind of almost misled to click on this to actually get in there.” John Mueller, Google 2014
Google has historically assigned more ‘weight’ in terms of relevance to pages where the text is completely visible to the user. Many designers use tabs and “read more” links to effectively hide text from being visible on loading a page.
This is probably not ideal from a Google ranking point of view.
Some independent tests have apparently confirmed this.
With Google switching to a mobile-first index, where it will use signals it detects on the mobile version of your site, you can expect text hidden in tabs to carry weight from a ranking point of view:
QUOTE: “So with the mobile first indexing will index the the mobile version of the page. And on the mobile version of the page it can be that you have these kind of tabs and folders and things like that, which we will still treat as normal content on the page even. Even if it is hidden on the initial view. So on desktop it’s something where we think if it’s really important content it should be visible. On mobile it’s it’s a bit trickier obviously. I think if it’s a critical contact it should be visible but that’s more kind of between you and your users in the end.” John Mueller, Google 2017
On the subject of ‘read more’ links in general, some Google spokespeople have chimed in thus:
QUOTE: “oh how I hate those. WHY WHY WHY would a site want to hide their content?” John Mueller, Google 2017
QUOTE: “I’ve never understood the rationale behind that. Is it generating more money? Or why are people doing that?” Gary Illyes, Google 2017
There are some benefits for some sites to use ‘read more’ links, but I usually shy away from using them on desktop websites.
How Google Treats Content Loaded from Another Page Using JS
If a user needs to click to load content that is pulled from another page, from my own observations, Google will not index that content as part of the original page IF the content itself is not in the page’s HTML when it is indexed.
Google’s John Mueller confirmed this in a 2016 presentation:
QUOTE: “If you have something that’s not even loaded by default that requires an event some kind of interaction with the user then that’s something we know we might not pick up at all because Googlebot isn’t going to go around and click on various parts of your site just to see if anything new happens so if you have something that you have to click on like like this, click to read more and then when someone clicks on this actually there’s an ajax call that pulls in the content and displays it then that’s something we probably won’t be able to use for indexing at all so if this is important content for you again move it to the visible part of the page.” John Mueller, Google 2016
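Mueller’s advice boils down to this: put the content in the initial HTML and let JavaScript only toggle its visibility, rather than fetching it on click. A minimal sketch (the element here is a stand-in for a DOM node, for illustration only):

```javascript
// The content is already in the server-rendered HTML, so Googlebot can index
// it; clicking only flips visibility -- no AJAX call pulls in new content.
function toggleSection(el) {
  el.hidden = !el.hidden;
  return !el.hidden; // true when the section is now visible
}

// Minimal stand-in for a DOM element:
const section = { hidden: true };
toggleSection(section); // section is now visible
```

Contrast this with a click handler that makes an AJAX call and injects the response: in that pattern, the text never exists on the page Googlebot renders, so it cannot be indexed as part of that page.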
Probably not a good idea:
QUOTE: “You’re hiding links from Googlebot. Personally, I wouldn’t recommend doing that as it makes it harder to properly understand the relevance of your website. Ultimately, that’s your choice, just as it would be our choice to review those practices as potential attempts to hide content & links. What’s the advantage for the user?” John Mueller, Google 2010
TIP: Mobile-First Indexing: Ensure Your Mobile Sites and Desktop Sites Are Equivalent
See my notes on the Google mobile-first index.
QUOTE: “Sometimes things don’t go perfectly during rendering, which may negatively impact search results for your site.” Google
How To Test If Google Can Render Your Pages Properly
QUOTE: “Use Search Console’s Fetch and Render tool to test how Googlebot sees your pages. Note that this tool doesn’t support “#!” or “#” URLs. Avoid using “#” in URLs (outside of “#!”). Googlebot rarely indexes URLs with “#” in them. Use “normal” URLs with path/filename/query-parameters instead, consider using the History API for navigation.” John Mueller, Google 2016
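To illustrate Mueller’s point about moving from “#!” URLs to “normal” ones, here is a rough sketch that maps a hash-bang URL to a path-based URL; the URL scheme and function name are hypothetical, and in a real SPA you would then navigate to the clean URL with `history.pushState()`:

```javascript
// Convert an old "#!" (hash-bang) URL into a "normal" path-based URL,
// e.g. https://example.com/#!/products/shoes -> https://example.com/products/shoes
function hashBangToPath(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url;           // already a normal URL
  const fragment = url.slice(i + 2);  // e.g. "/products/shoes"
  return url.slice(0, i).replace(/\/$/, '') + '/' + fragment.replace(/^\//, '');
}

hashBangToPath('https://example.com/#!/products/shoes');
// → 'https://example.com/products/shoes'
```

The path-based URL is a real address Googlebot can fetch and index directly, which is exactly what the fragment-based version is not.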
QUOTE: “Rendering on Google Search: http://bit.ly/2wffmpL – web rendering is based on Chrome 41; use feature detection, polyfills, and log errors!” Ilya Grigorik 2017
QUOTE: “Googlebot uses a web rendering service (WRS) that is based on Chrome 41 (M41). Generally, WRS supports the same web platform features and capabilities that the Chrome version it uses — for a full list refer to chromestatus.com, or use the compare function on caniuse.com.” Google, 2020
Will Google Properly Render Single Page Applications and Execute Ajax Calls?
Yes. Google will properly render Single Page Applications (SPAs) and execute Ajax calls, provided the content loads without requiring user interaction.
Will Google Properly Render Hash-Bang URLs (#!)?
You can now test this using the Fetch as Google rendering tool, too, and comply with guidelines regarding rendering the #! URLs directly:
QUOTE: “Test with Search Console’s Fetch & Render. Compare the results of the #! URL and the escaped URL to see any differences.” Google Webmaster Guidelines via tldrseo.com 2018
If you are still using escaped fragments, make sure to also fetch & render both versions using Fetch As Google to ensure you are not cloaking.
Fetch As Google
Check how your site renders to Google using the Fetch & Render Tool that Google provides as part of Google Search Console.
It is important that what Google (Googlebot) sees is (exactly) what a visitor would see if they visit your site. Blocking Google can sometimes result in a real ranking problem for websites.
Google renders your web pages as part of their web ranking process, so ENSURE you do not block any important elements of your website that impact how a user or Googlebot would see your pages.
If Google has problems accessing particular parts of your website, it will tell you in Search Console.
Tip: The fetch and render tool is also a great way to submit your website and new content to Google for indexing so that the page may appear in Google search results.
Tip: Ensure you use canonical link elements properly.
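As with titles and meta tags, the canonical link element is worth verifying on the *rendered* page. A rough sketch (the function name is mine, and regex parsing is for illustration only):

```javascript
// Read the canonical URL a rendered page declares, or null if it has none.
function getCanonical(html) {
  const m = html.match(/<link\s+rel=["']canonical["']\s+href=["']([^"']*)["']/i);
  return m ? m[1] : null;
}

getCanonical('<link rel="canonical" href="https://example.com/page">');
// → 'https://example.com/page'
```

If JS rewrites or removes the canonical element after load, what Google sees may differ from your raw HTML, which is exactly the class of problem the Fetch & Render tool exists to catch.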
Check How Google Views Your Desktop Site
Google Search Console will give you information on this.
Check How Google Views Your Smartphone Site
Google Search Console will give you information on this.
- Download Chrome 41.
- Visit the website.
- Open Chrome Developer Tools.
- Click “Console” and look for errors in red.
- Check for blocked resources in Google Search Console.
- See how your page renders using the “Fetch As Google” tool.
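The console-checking step above can be crudely automated: collect the console entries and keep only the errors, since those are the messages most likely to indicate scripts that fail in Google’s Chrome-41-based renderer. The entry shape here is hypothetical:

```javascript
// Filter a list of console entries (as you'd read them in DevTools) down to
// the "error"-level ones -- the likely render-breakers for Googlebot.
function renderBlockingErrors(entries) {
  return entries.filter(function (e) { return e.level === 'error'; });
}

renderBlockingErrors([
  { level: 'error', text: 'Uncaught ReferenceError: fetch is not defined' },
  { level: 'warning', text: 'Deprecated API' },
]);
// → one entry: the ReferenceError
```

Any error that only appears in Chrome 41 (and not in a current browser) is a strong candidate for a feature-detection or polyfill fix.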