The author has over 20 years of experience in SEO and website development.
Here are some things to check:
- Have you used the URL Inspection tool to see exactly how Google sees and renders your website’s content?
- Are unique and descriptive title elements used for every page?
- Are helpful meta descriptions used for every page?
- Are meaningful HTTP status codes used to tell Googlebot when a page can’t be crawled or indexed?
- Is the History API used instead of fragments to ensure that Googlebot can parse and extract your URLs?
- Is the rel="canonical" link tag used and properly injected?
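The History API point above can be sketched as follows. `toHistoryUrl` is a hypothetical helper (not part of any library), shown only to illustrate moving from "#!" fragments to plain paths that Googlebot can parse and index:

```javascript
// Hypothetical helper: convert a legacy "#!" URL into a plain path URL,
// so navigation can use the History API instead of fragments.
function toHistoryUrl(url) {
  const i = url.indexOf("#!");
  if (i === -1) return url; // already a normal URL
  return url.slice(0, i).replace(/\/$/, "") + url.slice(i + 2);
}

// In the browser, you would then navigate without a fragment:
// history.pushState({}, "", toHistoryUrl("https://example.com/#!/products"));
```

The point of the sketch: each view gets a "normal" URL with a real path, which is what the checklist above asks for.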
Use the robots meta tags carefully
Here are some things to check:
- Does the page use robots meta tags?
- Is the website using the correct syntax for the robots meta tag, i.e., <meta name="robots" content="noindex, nofollow">?
- If there is any chance the page should be indexed, does the original page code avoid the noindex directive?
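A quick way to check the points above is to read the robots meta tag from the rendered page in the DevTools console. This is a sketch, not a full audit; the function takes the document as a parameter only so the example is self-contained:

```javascript
// Return the robots directives from the rendered page, if any.
function robotsDirectives(doc) {
  const tag = doc.querySelector('meta[name="robots"]');
  return tag ? tag.content.split(",").map((d) => d.trim().toLowerCase()) : [];
}

// In the browser console:
// robotsDirectives(document);                       // e.g. ["noindex", "nofollow"]
// robotsDirectives(document).includes("noindex");   // should be false on pages you want indexed
```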
Use long-lived caching
- Does the website use long-lived caching?
- Has the website followed the long-lived caching strategies outlined in the web.dev guide?
Use structured data
- Does the website use structured data?
- Has the website tested its implementation to avoid any issues?
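For reference, structured data is usually added as JSON-LD in a script tag that Google reads from the rendered page. A minimal sketch with placeholder values (your real page data goes in the object):

```javascript
// Example JSON-LD object (placeholder values, schema.org Article type).
const articleLd = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Example headline",
  datePublished: "2020-01-01",
};

// In the browser you would inject it into <head>:
// const s = document.createElement("script");
// s.type = "application/ld+json";
// s.textContent = JSON.stringify(articleLd);
// document.head.appendChild(s);
```

Testing the implementation, as the checklist asks, means confirming the JSON is valid and that the markup appears in the rendered page Google sees.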
Test with URL Inspection Tool
- Go to Google Search Console.
- Select the property containing the URL you want to inspect.
- Click “URL Inspection” in the left-hand menu.
- Enter the URL you want to inspect in the search bar and hit “Enter”.
Test with Mobile-Friendly Test
- Go to Google’s Mobile-Friendly Test.
- Enter the URL you want to test in the search bar and hit “Test URL”.
Test with Google Chrome DevTools
- Open the web page you want to test in Google Chrome.
- Right-click anywhere on the page and select “Inspect” from the context menu.
- In the DevTools window that opens, click the “Console” tab.
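Once the Console tab is open, a snippet like the following reports the SEO-relevant metadata the rendered page actually exposes. This is a sketch (the function takes the document as a parameter only so it can be checked outside a browser):

```javascript
// Report title, description, robots directives and canonical from the rendered page.
function seoSnapshot(doc) {
  const meta = (name) => doc.querySelector(`meta[name="${name}"]`)?.content || null;
  const link = (rel) => doc.querySelector(`link[rel="${rel}"]`)?.href || null;
  return {
    title: doc.title || null,
    description: meta("description"),
    robots: meta("robots"),
    canonical: link("canonical"),
  };
}

// In the browser console:
// seoSnapshot(document);
```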
Most other search engines advise webmasters not to block JS and CSS files, much as Google does. Bing, for example, states this explicitly in its webmaster guidelines.
Frequently asked questions
QUOTE: “The web has moved from plain HTML – as an SEO you can embrace that. Learn from JS devs & share SEO knowledge with them. JS’s not going away.” John Mueller, Google 2017
QUOTE: “Use Chrome’s Inspect Element to check the page’s title and description meta tag, any robots meta tag, and other meta data. Also check that any structured data is available on the rendered page.” Google, 2017
You can read my notes on optimising page titles and meta tags for Google.
As long as Google can render the page properly, Google will follow any links presented to it, including those marked nofollow.
QUOTE: “Avoid the AJAX-Crawling scheme on new sites. Consider migrating old sites that use this scheme soon. Remember to remove “meta fragment” tags when migrating. Don’t use a “meta fragment” tag if the “escaped fragment” URL doesn’t serve fully rendered content.” John Mueller, Google 2016
See my notes on optimising internal links for Google and how to get Google to index my website properly.
QUOTE: “Yes, a link is a link, regardless of how it comes to the page. It wouldn’t really work otherwise.” John Mueller, Google 2017
Google will follow links found in JS code on a page, but will not pass signals such as PageRank through them (2020):
TIP: Use “feature detection” & “progressive enhancement” techniques to make your site accessible to all users
QUOTE: “Don’t cloak to Googlebot. Use “feature detection” & “progressive enhancement” techniques to make your content available to all users. Avoid redirecting to an “unsupported browser” page. Consider using a polyfill or other safe fallback where needed. The features Googlebot currently doesn’t support include Service Workers, the Fetch API, Promises, and requestAnimationFrame.” John Mueller, Google 2016
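Feature detection, as the quote recommends, means testing for the capability itself rather than for a browser or for Googlebot. A minimal sketch (`observeAndRenderLazily` and `renderEverythingNow` are hypothetical names standing in for your own rendering code):

```javascript
// Return true if the given scope (e.g. window) exposes the named feature.
function supports(scope, feature) {
  return typeof scope[feature] !== "undefined";
}

// Progressive enhancement in the browser: lazy-render only when the feature
// exists, otherwise render everything up front so all users get the content.
// if (supports(window, "IntersectionObserver")) { observeAndRenderLazily(); }
// else { renderEverythingNow(); }
```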
How Does Google Handle Content Which Becomes Visible When Clicking A Button?
This depends on whether the content is already present in the page, or is pulled from another URL by a JS function that only fires when the user performs some action.
QUOTE: “Take for example Wikipedia on your mobile phone they’ll have different sections and then if you click they expand those sections and there are good usability reasons for doing that so as long as you’re not trying to stuff something in in a hidden way that’s deceptive or trying to you know distort the rankings as long as you’re just doing that for users I think you’ll be in good shape.” Matt Cutts, Google 2013
As long as the text is available on that page when Google crawls it, Google can see the text and use it in relevance calculations. How Google treats this type of content can be different on mobile sites and desktop sites.
QUOTE: “I think we’ve been doing something similar for quite awhile now, where if we can recognize that the content is actually hidden, then we’ll just try to discount it in a little bit. So that we kind of see that it’s still there, but the user doesn’t see it. Therefore, it’s probably not something that’s critical for this page. So that includes, like, the Click to Expand. That includes the tab UIs, where you have all kinds of content hidden away in tabs, those kind of things. So if you want that content really indexed, I’d make sure it’s visible for the users when they go to that page. From our point of view, it’s always a tricky problem when we send a user to a page where we know this content is actually hidden. Because the user will see perhaps the content in the snippet, they’ll click through the page, and say, well, I don’t see where this information is on this page. I feel kind of almost misled to click on this to actually get in there.” John Mueller, Google 2014
Google has historically assigned more ‘weight’ in terms of relevance to pages where the text is completely visible to the user. Many designers use tabs and “read more” links to effectively hide text from being visible on loading a page.
This is probably not ideal from a Google ranking point of view.
Some independent tests have apparently confirmed this.
With Google switching to a mobile-first index, where it will use signals it detects on the mobile version of your site, you can expect text hidden in tabs to carry weight from a ranking point of view:
QUOTE: “So with the mobile first indexing will index the the mobile version of the page. And on the mobile version of the page it can be that you have these kind of tabs and folders and things like that, which we will still treat as normal content on the page even. Even if it is hidden on the initial view. So on desktop it’s something where we think if it’s really important content it should be visible. On mobile it’s it’s a bit trickier obviously. I think if it’s a critical contact it should be visible but that’s more kind of between you and your users in the end.” John Mueller, Google 2017
On the subject of “read more” links in general, some Google spokespeople have chimed in thus:
QUOTE: “oh how I hate those. WHY WHY WHY would a site want to hide their content?” John Mueller, Google 2017
QUOTE: “I’ve never understood the rationale behind that. Is it generating more money? Or why are people doing that?” Gary Illyes, Google 2017
There are some benefits for some sites to use ‘read more’ links, but I usually shy away from using them on desktop websites.
How Google Treats Content Loaded from Another Page Using JS
If a user must click to load content that is pulled from another URL, then, from my own observations, Google will not index that content as part of the original page, because the content itself is not in the page that gets indexed.
Google’s John Mueller confirmed this in a 2016 presentation:
QUOTE: “If you have something that’s not even loaded by default that requires an event some kind of interaction with the user then that’s something we know we might not pick up at all because Googlebot isn’t going to go around and click on various parts of your site just to see if anything new happens so if you have something that you have to click on like like this, click to read more and then when someone clicks on this actually there’s an ajax call that pulls in the content and displays it then that’s something we probably won’t be able to use for indexing at all so if this is important content for you again move it to the visible part of the page.” John Mueller, Google 2016
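The distinction Mueller describes can be sketched in two patterns. In the indexable one, the full text is in the initial HTML and a click only toggles visibility; in the other, nothing exists until a click triggers a fetch, and Googlebot does not click:

```javascript
// Indexable pattern: the content is already in the DOM; a click only
// toggles its visibility, so Googlebot sees the text either way.
function toggleMore(section) {
  const more = section.querySelector(".more"); // shipped in the initial HTML
  more.hidden = !more.hidden;
}

// Pattern to avoid for important content: the text only exists after a
// user clicks, which means it is not available for indexing at all.
// button.addEventListener("click", async () => {
//   container.innerHTML = await (await fetch("/more.html")).text();
// });
```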
Probably not a good idea:
QUOTE: “You’re hiding links from Googlebot. Personally, I wouldn’t recommend doing that as it makes it harder to properly understand the relevance of your website. Ultimately, that’s your choice, just as it would be our choice to review those practices as potential attempts to hide content & links. What’s the advantage for the user?” John Mueller, Google 2010
TIP: Mobile-First Indexing: Ensure your mobile sites and desktop sites are equivalent
See my notes on the Google mobile-first index.
QUOTE: “Sometimes things don’t go perfectly during rendering, which may negatively impact search results for your site.” Google
How To Test If Google Can Render Your Pages Properly
QUOTE: “Use Search Console’s Fetch and Render tool to test how Googlebot sees your pages. Note that this tool doesn’t support “#!” or “#” URLs. Avoid using “#” in URLs (outside of “#!”). Googlebot rarely indexes URLs with “#” in them. Use “normal” URLs with path/filename/query-parameters instead, consider using the History API for navigation.” John Mueller, Google 2016
QUOTE: “Rendering on Google Search: http://bit.ly/2wffmpL – web rendering is based on Chrome 41; use feature detection, polyfills, and log errors!” Ilya Grigorik 2017
QUOTE: “Googlebot uses a web rendering service (WRS) that is based on Chrome 41 (M41). Generally, WRS supports the same web platform features and capabilities that the Chrome version it uses — for a full list refer to chromestatus.com, or use the compare function on caniuse.com.” Google, 2020
Note that since May 2019 Googlebot has used an evergreen rendering engine based on the latest stable Chromium, so the Chrome 41 limitations quoted above no longer apply to current Googlebot.
Will Google Properly Render Single Page Application and Execute Ajax Calls?
Yes. Google will properly render Single Page Applications (SPAs):
Will Google Properly Render Hash-Bang URLs (#!)?