SEO Tutorial For Beginners in 2018

Disclosure: “This article is personal opinion of research based on my experience of almost 20 years. There is no third party advertising on this page or monetised links of any sort. External links to third party sites are moderated by me. Disclaimer.” Shaun Anderson, Hobo

What is SEO in 2018?

Search Engine Optimisation (SEO) in 2018 is a technical, analytical and creative process to improve the visibility of a website in search engines. The primary function of SEO is to drive more unpaid useful traffic to a site that converts into sales.

The free SEO tips you will read on this page will help you create a successful SEO friendly website yourself.

I have nearly 20 years' experience making websites rank in Google. If you need optimisation services – see my SEO consultation service, website migration service, SEO audit or small business SEO services.

TL;DR –  What Really Matters if you do SEO in 2018?

QUOTE: “There aren’t any quick magical tricks that an SEO will provide so that your site ranks number one. It’s important to note that any SEO potential is only as high as the quality of your business or website so successful SEO helps your website put your best foot forward.” Maile Ohye, Google 2017

In my opinion, here are the things that really matter if you do SEO in 2018, and I would wager that both white-hats and black-hats would agree on these:

  • Don’t block your site
  • Don’t confuse or annoy a website visitor
  • Don’t block Google from crawling resources on your site or rendering specific elements on your page
  • Have a responsive design that works on mobile and desktop
  • Be situated locally to your target customer
  • Geotarget your site in Search Console AKA Google Webmaster Tools (unless you have a country specific domain)
  • Put your keyword phrase at least once in the Page Title Element
  • Put your keyword phrase at least once in the Main Content of the page (at least once in page copy, in paragraph tags)
  • Avoid keyword stuffing main content
  • Optimise your meta description to have a clickable useful SERP snippet (a basic markup sketch of these on-page elements follows this list)
  • Ensure the Main Content of the page is high-quality and written by a professional (MOST OF YOUR EFFORT GOES HERE – If your content is not being shared organically, you may have a content quality problem)
  • Ensure the keywords you want to rank for are present on your site. The quality of competition for these rankings will determine how much effort you need to put in
  • Use synonyms and common co-occurring words throughout your page copy
  • Add value to pages with ordered lists, images, videos and tables
  • Optimise for increased ‘user intent’ satisfaction (e.g. increased dwell times on a page or site)
  • Keep important content on the site updated a few times a year
  • Trim outdated content from your site
  • Avoid publishing and indexing content-poor pages (especially affiliate sites)
  • Aim for a good ratio of ‘useful’ user-centred text to affiliate links
  • Disclose page modification dates in a visible format
  • Do not push the main content down a page unnecessarily with ads etc
  • Link to related content on your site with useful and very relevant anchor text
  • Use a simple navigation system on your site
  • Create pages to meet basic W3C recommendations on accessible HTML (H1 headings, ALT text etc.)
  • Create pages to meet basic usability best practices (Nielsen) – Pay attention to what ‘annoys’ website visitors
  • Create pages where the main content of the page is given priority, and remove annoying ads and pop-ups (especially on mobile)
  • Develop websites that meet Google technical recommendations on (for example) canonicalization, internationalisation and pagination best practices
  • Ensure Fast delivery of web pages on mobile and desktop
  • Provide clear disclosure of affiliate ads and non-intrusive advertising. Clear disclosure of everything, in fact, if you are focused on quality in all areas.
  • Add high-quality and relevant external links (depending if the query is informational)
  • If you can, include the Keyword phrase in a short URL
  • Use the Keyword phrase in internal anchor text pointing to this page (at least once)
  • Use Headings, Lists and HTML Tables on pages if you show data
  • Ensure on average all ‘Main Content’ blocks of all pages on the site are high-quality
  • Ensure old SEO practices are cleaned up and removed from site
  • Avoid implementing old-school SEO practices in new campaigns (Google is better at detecting sites with little value-add)
  • Consider disavowing any obvious low-quality links from previous SEO efforts
  • Provide clear website domain ownership, copyright and contact details on the site
  • Share your content on the major social networks when it is good enough
  • Get backlinks from real websites with real domain trust and authority
  • Convert visitors (whatever that ‘conversion’ may be)
  • Monitor VERY CAREFULLY any user-generated content on your site, because it is rated as part of your own site content
  • Pay attention to site security issues (implement https, for example)
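
To make a few of the on-page points above concrete, here is a minimal, hypothetical HTML sketch. The keyword phrase (‘blue widgets’), URLs and copy are invented purely for illustration – this is one reasonable way to mark up a page title, meta description, heading, ALT text, canonical and a visible modification date, not a Google requirement:

  <!DOCTYPE html>
  <html lang="en-GB">
  <head>
    <!-- keyword phrase once in the Page Title Element -->
    <title>Blue Widgets | Example Widget Company</title>
    <!-- meta description written for a clickable, useful SERP snippet -->
    <meta name="description" content="Compare our range of blue widgets – UK delivery, a 20-year guarantee and free returns.">
    <!-- canonical URL, short and including the keyword phrase where natural -->
    <link rel="canonical" href="https://www.example.com/blue-widgets/">
  </head>
  <body>
    <h1>Blue Widgets</h1>
    <p>Last updated: January 2018</p>
    <!-- keyword phrase at least once in the main content, in paragraph tags -->
    <p>Our blue widgets are made in the UK and come with a 20-year guarantee...</p>
    <img src="blue-widget.jpg" alt="A blue widget, shown from the front">
  </body>
  </html>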

What really matters in SEO in 2018 is what you prioritise today so that in 3-6 months you can see improvements in the quality of your organic traffic. I lay this out in my comprehensive SEO audits (see an example SEO audit here).

QUOTE: “In most cases the SEO will need four months to a year to help your business first implement improvements and then see potential benefit.” Maile Ohye, Google 2017

You will need to meet Google’s guidelines and recommendations in every area in 2018 (and, if you are like me with this site, you eventually avoid bending any rule and just focus on serving the user useful and up-to-date content).

Read on for a more meandering look at modern SEO in 2018.

What Are The Best SEO Tools for SEO in 2018?

You can use tools like SEMRush (specifically the SEMRush Audit Tool), SiteBulb Crawler, DeepCrawl, Screaming Frog or SEO Powersuite Website Auditor to check for SEO issues.

If you are not technically minded, we can analyse and optimise your website for you as part of our fixed price SEO service.

An Introduction to SEO

QUOTE: “Search engine optimization is often about making small modifications to parts of your website. When viewed individually, these changes might seem like incremental improvements, but when combined with other optimizations, they could have a noticeable impact on your site’s user experience and performance in organic search results.” Google Starter Guide, 2008

This article is a beginner’s guide to effective white hat SEO.

I deliberately steer clear of techniques that might be ‘grey hat’, as what is grey today is often ‘black hat’ tomorrow, or ‘shady practices’, as far as Google is concerned.

QUOTE: “Shady practices on your website […] result in a reduction in search rankings” Maile Ohye, Google 2017

No one-page guide can explore this complex topic in full. What you’ll read here are answers to questions I had when I was starting out in this field 20 years ago, now corroborated with confirmations from Google.

QUOTE: “My strongest advice when working with an SEO is to request if they corroborate their recommendation with a documented statement from Google” Maile Ohye, Google 2017

The ‘Rules.’

Google insists webmasters adhere to their ‘rules’ and aims to reward sites with high-quality content and remarkable ‘white hat’ web marketing techniques with high rankings.

QUOTE: “Creating compelling and useful content will likely influence your website more than any of the other factors discussed here.” Google SEO Starter Guide, 2017

Conversely, it also needs to penalise websites that manage to rank in Google by breaking these rules.

QUOTE: “Basically, we figured that site is trying to game our systems, and unfortunately, successfully. So we will adjust the rank. We will push the site back just to make sure that it’s not working anymore.” Gary Illyes, Google 2016

These rules are not ‘laws’ but ‘guidelines’ for ranking in Google, laid down by Google. You should note, however, that some methods of ranking in Google are, in fact, illegal. Hacking, for instance, is illegal in the UK, US, Canada, and Australia.

You can choose to follow and abide by these rules, bend them or ignore them – all with different levels of success (and levels of retribution, from Google’s web spam team).

White hats do it by the ‘rules’; black hats ignore the ‘rules’.

What you read in this article is perfectly within the laws and also within the guidelines and will help you increase the traffic to your website through organic, or natural search engine results pages (SERPs).


There are a lot of definitions of SEO (spelled Search engine optimisation in the UK, Australia and New Zealand, or search engine optimization in the United States and Canada) but organic SEO in 2018 is still mostly about getting free traffic from Google, the most popular search engine in the world (and almost the only game in town in the UK in 2018):

(Chart: Search Engine Market Share Worldwide, December 2017)


The art of web SEO lies in understanding how people search for things and understanding what type of results Google wants to (or will) display to its users. It’s about putting a lot of things together to look for opportunity.

A good optimiser has an understanding of how search engines like Google generate their natural SERPs to satisfy users’ navigational, informational and transactional keyword queries.

QUOTE: “One piece of advice I tend to give people is to aim for a niche within your niche where you can be the best by a long stretch. Find something where people explicitly seek YOU out, not just “cheap X” (where even if you rank, chances are they’ll click around to other sites anyway).” John Mueller, Google 2018

Risk Management

A good search engine marketer has a good understanding of the short term and long term risks involved in optimising rankings in search engines, and an understanding of the type of content and sites Google (especially) WANTS to return in its natural SERPs.

The aim of any campaign is more visibility in search engines and this would be a simple process if it were not for the many pitfalls.

There are rules to be followed or ignored, risks to take, gains to make, and battles to be won or lost.

Free Traffic

QUOTE: “Google is “the biggest kingmaker on this Earth.” Amit Singhal, Google, 2010

A Mountain View spokesman once called the search engine ‘kingmakers‘, and that’s no lie.

Ranking high in Google is VERY VALUABLE – it’s effectively ‘free advertising’ on the best advertising space in the world.

Traffic from Google natural listings is STILL the most valuable organic traffic to a website in the world, and it can make or break an online business.

The state of play, in 2018, is that you can STILL generate highly targeted leads, for FREE, just by improving your website and optimising your content to be as relevant as possible for a buyer looking for your company, product or service.

As you can imagine, there’s a LOT of competition now for that free traffic – even from Google (!) in some niches.

You shouldn’t compete with Google. You should focus on competing with your competitors.

The Process

The process that is SEO can be practised, successfully, in a bedroom or a workplace, but it has traditionally always involved mastering many skills as they arose, across diverse marketing technologies, including but not limited to:

  • Website design
  • Accessibility
  • Usability
  • User experience
  • Website development
  • PHP, HTML, CSS, etc.
  • Server management
  • Domain management
  • Copywriting
  • Spreadsheets
  • Backlink analysis
  • Keyword research
  • Social media promotion
  • Software development
  • Analytics and data analysis
  • Information architecture
  • Research
  • Log Analysis
  • Looking at Google for hours on end

It takes a lot, in 2018, to rank a page on merit in Google in competitive niches, due to the amount of competition for those top spots.

User Experience

QUOTE: “At Google we are aiming to provide a great user experience on any device, we’re making a big push to ensure the search results we deliver reflect this principle.” Google 2014

The big stick Google is hitting every webmaster with (at the moment, and for the foreseeable future) is the ‘USER EXPERIENCE‘ stick.

There is no single ‘user experience’ ranking factor, we have been told. However, a poor user experience clearly does not lead to high rankings in Google.

QUOTE: “I don’t think we even see what people are doing on your website if they’re filling out forms or not if they’re converting to actually buying something so if we can’t really see that then that’s not something that we’d be able to take into account anyway. So from my point of view that’s not something I’d really treat as a ranking factor. Of course if people are going to your website and they’re filling out forms or signing up for your service or for a newsletter then generally that’s a sign that you’re doing the right things.”. John Mueller, Google 2015

Note what Google labels ‘user experience‘ may differ from how others define it.

Take for instance a slow page load time, which is a poor user experience:

QUOTE: “We do say we have a small factor in there for pages that are really slow to load where we take that into account.” John Mueller, Google, 2015

or sites that don’t have much content “above-the-fold”:

QUOTE: “So sites that don’t have much content “above-the-fold” can be affected by this change. If you click on a website and the part of the website you see first either doesn’t have a lot of visible content above-the-fold or dedicates a large fraction of the site’s initial screen real estate to ads, that’s not a very good user experience.” Google 2012

These are user experience issues Google penalises for, but as another Google spokesperson pointed out:

QUOTE: “Rankings is a nuanced process and there is over 200 signals.” Maile Ohye, Google 2010

If you expect to rank in Google in 2018, you’d better have a quality offering, not based entirely on manipulation, or old school tactics.

Is a visit to your site a good user experience? Is it performing its task better than the competition?

If not – beware the manual ‘Quality Raters’ and beware the Google Panda/Site Quality algorithms that are looking to demote sites based on what we can easily point to as poor user experience signals and unsatisfying content when it is presented to Google’s users.

Google raising the ‘quality bar’, year on year, ensures a higher level of quality in online marketing in general (above the very low-quality we’ve seen over the last years).

Success in organic marketing in 2018 involves investment in higher quality on-page content, better website architecture, improved usability, intelligent conversion to optimisation balance, and ‘legitimate’ internet marketing techniques.

If you don’t take that route, you’ll find yourself chased down by Google’s algorithms at some point in the coming year.

This ‘what is SEO‘ guide (and this entire website) is not about churn and burn type of Google SEO (called webspam to Google) as that is too risky to deploy on a real business website in 2018.

QUOTE: “Blackhat SEO fads: like walking into a dark alley, packed with used car salesmen, who won’t show you their cars.” Matt Cutts, Google 2014

What Is A Successful Strategy?

Get relevant. Get trusted. Get Popular. Help a visitor complete their task.

SEO is no longer just about manipulation in 2018.

It’s about adding quality and often useful content to your website that together meet a PURPOSE that delivers USER SATISFACTION over the longer term.

If you are serious about getting more free traffic from search engines, get ready to invest time and effort in your website and online marketing.

Quality Signals

QUOTE: “Another problem we were having was an issue with quality and this was particularly bad (we think of it as around 2008 2009 to 2011) we were getting lots of complaints about low-quality content and they were right. We were seeing the same low-quality thing but our relevance metrics kept going up and that’s because the low-quality pages can be very relevant. This is basically the definition of a content farm in our in our vision of the world so we thought we were doing great our numbers were saying we were doing great and we were delivering a terrible user experience and turned out we weren’t measuring what we needed to so what we ended up doing was defining an explicit quality metric which got directly at the issue of quality it’s not the same as relevance …. and it enabled us to develop quality related signals separate from relevant signals and really improve them independently so when the metrics missed something what ranking engineers need to do is fix the rating guidelines… or develop new metrics.” SMX West 2016 – How Google Works: A Google Ranking Engineer’s Story (VIDEO)

In short, it was too easy (for some) to manipulate Google’s rankings at the beginning of the decade. If you had enough ‘domain authority’ you could use ‘thin content’ to rank for anything. This is the definition of a ‘content farm’.

Web spammers often used ‘unnatural‘ backlinks to build fake ‘domain authority‘ to rank this ‘thin’ content. I know I did in the past.

So Google raised the bar.

Google decided to rank HIGH-QUALITY documents in its results and force those who wish to rank high to invest in higher-quality content or a great customer experience that creates buzz and attracts editorial links from reputable websites.

These high-quality signals are in some way based on Google being able to detect a certain amount of attention and effort put into your site and Google monitoring over time how users interact with your site. These types of quality signals are much harder to game than they were in 2011.

Essentially, the ‘agreement’ with Google is if you’re willing to add a lot of great content to your website and create a buzz about your company, Google will rank you high above others who do not invest in this endeavour.

QUOTE: “high quality content is something I’d focus on. I see lots and lots of SEO blogs talking about user experience, which I think is a great thing to focus on as well. Because that essentially kind of focuses on what we are trying to look at as well. We want to rank content that is useful for (Google users) and if your content is really useful for them, then we want to rank it.” John Mueller, Google 2016

If you try to manipulate Google, it will penalise you for a period, and often until you fix the offending issue – which we know can LAST YEARS.

If you are a real business that intends to build a brand online and rely on organic traffic – you can’t use black hat methods. Full stop. It can take a LONG time for a site to recover from using black hat SEO tactics, and fixing the problems will not necessarily bring organic traffic back as it was before a penalty.

QUOTE: “Cleaning up these kinds of link issue can take considerable time to be reflected by our algorithms (we don’t have a specific time in mind, but the mentioned 6-12 months is probably on the safe side). In general, you won’t see a jump up in rankings afterwards because our algorithms attempt to ignore the links already, but it makes it easier for us to trust the site later on.” John Mueller, Google, 2018

Recovery from a Google penalty is a ‘new growth’ process as much as it is a ‘clean-up’ process.

Google Rankings Always Change

Google Rankings Are In Constant Ever-Flux

QUOTE: “Things can always change in search.” John Mueller, Google 2017

Graph: Typical Google rankings 'ever flux'

Put simply – it’s Google’s job to MAKE MANIPULATING SERPs HARD. It’s HARD to get to number 1 in Google for competitive keyword phrases.

So – the people behind the algorithms keep ‘moving the goalposts’, modifying the ‘rules’ and raising ‘quality standards’ for pages that compete for top ten rankings.

In 2018 – we have ever-flux in the SERPs – and that seems to suit Google and keep everybody guessing.

Fluctuating Rankings

QUOTE: “nobody will always see ranking number three for your website for those queries. It’s always fluctuating.” John Mueller, Google, 2015

This flux is not necessarily a sign of a problem per se, and

QUOTE: “that’s just a sign that our algorithms are fluctuating with the rankings.” John Mueller, Google 2015

Fluctuating upwards could be a good sign as he mentioned: “maybe this is really relevant for the first page, or maybe not.” – then again – the converse is true, one would expect.

 He says “a little bit of a push to make your website a little bit better, so that the algorithms can clearly say, yes, this really belongs on the first page.” which I thought was an interesting turn of phrase.  ‘First page’, rather than ‘number 1’.

Google is very secretive about its ‘secret sauce’ and offers sometimes helpful and sometimes vague advice – and some say offers misdirection – about how to get more valuable traffic from Google.

Google is on record as saying the engine is intent on ‘frustrating’ search engine optimisers’ attempts to improve the amount of high-quality traffic to a website – at least (but not limited to) those using low-quality strategies classed as web spam.

QUOTE: “If you want to stop spam, the most straight forward way to do it is to deny people money because they care about the money and that should be their end goal. But if you really want to stop spam, it is a little bit mean, but what you want to do, is sort of break their spirits. There are lots of Google algorithms specifically designed to frustrate spammers. Some of the things we do is give people a hint their site will drop and then a week or two later, their site actually does drop. So they get a little bit more frustrated. So hopefully, and we’ve seen this happen, people step away from the dark side and say, you know what, that was so much pain and anguish and frustration, let’s just stay on the high road from now on.” Matt Cutts, Google 2013

At its core, Google search engine optimisation is still about KEYWORDS and LINKS. It’s about RELEVANCE, REPUTATION and TRUST. It is about QUALITY OF CONTENT, VISITOR SATISFACTION & INCREASED USER ENGAGEMENT. It is about users seeking your website out and completing the task they have.

A Good USER EXPERIENCE is a key to winning – and keeping – the highest rankings in many verticals.

Relevance, Authority & Trust

QUOTE: “Know that ‘content’ and ‘relevance’ are still primary.” Maile Ohye, Google 2010

Web page optimisation is in 2018 STILL about making a web page relevant and trusted enough to rank for any given search query.

It’s about ranking for valuable keywords for the long term, on merit. You can play by ‘white hat’ rules laid down by Google, and aim to build this Authority and Trust naturally, over time, or you can choose to ignore the rules and go full time ‘black hat’.

MOST SEO tactics still work, for some time, on some level, depending on who’s doing them, and how the campaign is deployed.

Whichever route you take, know that if Google catches you trying to modify your rank using overtly obvious and manipulative methods, then they will class you as a web spammer, and your site will be penalised (you will not rank high for relevant keywords).

QUOTE: “Those practices, referred to in the patent as “rank-modifying spamming techniques,” may involve techniques such as: Keyword stuffing, Invisible text, Tiny text, Page redirects, Meta tags stuffing, and Link-based manipulation.” Bill Slawski, Google Rank-Modifying Spammers Patent

These penalties can last years if not addressed, as some penalties expire and some do not – and Google wants you to clean up any violations.

Google does not want you to try and modify where you rank, easily. Critics would say Google would prefer you paid them to do that using Google Adwords.

The problem for Google is – ranking high in Google organic listings is a real social proof for a business, a way to avoid PPC costs and still, simply, the BEST WAY to drive VALUABLE traffic to a site.

It’s FREE, too, once you’ve met the always-increasing criteria it takes to rank top.

‘User Experience’ Does Matter

Is User Experience A Ranking Factor?

User experience is mentioned 16 times in the main content of the quality raters guidelines (official PDF), but we have been told by Google it is not, per se, a classifiable ‘ranking factor‘ on desktop search, at least.

QUOTE: “On mobile, sure, since UX is the base of the mobile friendly update. On desktop currently no.” Gary Illyes, Google, May 2015

While UX, we are told, is not literally a ‘ranking factor’, it is useful to understand exactly what Google calls a ‘poor user experience’ because if any poor UX signals are identified on your website, that is not going to be a healthy thing for your rankings anytime soon.

Matt Cutts’ (no longer with Google) consistent SEO advice was to focus on a satisfying user experience.

What is Bad UX?

For Google – rating UX, at least from a quality rater’s perspective, revolves around marking the page down for:

  • Misleading or potentially deceptive design
  • Sneaky redirects
  • Malicious downloads
  • Spammy user-generated content (unmoderated comments and posts)
  • Low-quality MC (main content of the page)
  • Low-quality or distracting SC (supplementary content)

In short, nobody is going to advise you to create a poor UX, on purpose, in light of Google’s algorithms and human quality raters who are showing an obvious interest in this stuff. Google is rating mobile sites on what it classes as frustrating UX – although on certain levels, what Google classes as ‘UX’ might be quite far apart from what a UX professional is familiar with, in the same way as Google’s mobile rating tools differ from, for instance, W3C mobile testing tools.

Google is still, evidently, more interested in rating the main content of the webpage in question and the reputation of the domain the page is on – relative to your site, and competing pages on other domains.

A satisfying UX can help your rankings, with second-order factors taken into consideration. A poor UX can seriously impact your human-reviewed rating, at least. Google’s punishing algorithms probably class pages as something akin to a poor UX if they meet certain detectable criteria e.g. lack of reputation or old-school SEO stuff like keyword stuffing a site.

If you are improving user experience by focusing primarily on the quality of the MC of your pages and avoiding – even removing – old-school SEO techniques – those certainly are positive steps to getting more traffic from Google in 2018 – and the type of content performance Google rewards is in the end largely at least about a satisfying user experience.

TAKE NOTE: Google is Moving To A ‘Mobile First‘ Index in 2018

Now that Google is determined to focus on ranking sites based on their mobile experience, the time is upon businesses to REALLY focus on delivering the fastest and most accessible DESKTOP and MOBILE friendly experience you can achieve.

Because if you DO NOT, your competition will, and Google may rank those pages above your own, in time.

QUOTE: ‘To make our results more useful, we’ve begun experiments to make our index mobile-first. Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results. Of course, while our index will be built from mobile documents, we’re going to continue to build a great search experience for all users, whether they come from mobile or desktop devices.

If you have a responsive site or a dynamic serving site where the primary content and markup is equivalent across mobile and desktop, you shouldn’t have to change anything.’ GOOGLE, 2017
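
If you run a responsive site of the kind described in that quote, one basic sanity check (a minimal sketch, not a complete mobile-friendly checklist) is that the same HTML and primary content are served to mobile and desktop, and that the page declares a viewport so it can adapt to small screens:

  <head>
    <!-- same markup served to mobile and desktop; layout adapts via CSS -->
    <meta name="viewport" content="width=device-width, initial-scale=1">
  </head>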

NOTE: Google has a ‘Page-Heavy’ Penalty Algorithm

Present your most compelling material above the fold at any resolution – Google also has a ‘Page Heavy Algorithm’ – In short, if you have too many ads on your page, or if paid advertising obfuscates copy or causes an otherwise frustrating user experience for Google’s visitors, your page can be demoted in SERPs:

QUOTE: ‘sites that don’t have much content “above-the-fold” can be affected.’ Google, 2012

Keep an eye on where you put your ads or other sponsored content – get in the way of your main content copy of the page you are designing and you could see traffic decline.

NOTE: Google has an ‘Interstitial and Pop-Up‘ Penalty Algorithm

Bear in mind also that Google now (since January 2017) has an Interstitial and Pop-Up ‘penalty’, so AVOID creating a marketing strategy that relies on these.

QUOTE: ‘Here are some examples of techniques that make content less accessible to a user:

(1) Showing a popup that covers the main content, either immediately after the user navigates to a page from the search results, or while they are looking through the page.

(2) Displaying a standalone interstitial that the user has to dismiss before accessing the main content.

(3) Using a layout where the above-the-fold portion of the page appears similar to a standalone interstitial, but the original content has been inlined underneath the fold. Google’

EXIT POP-UPS (like the one I use on this site) evidently do NOT interfere with a reader’s enjoyment and access to the primary content on a page. At the moment, these types of pop-ups seem to be OK for now (and do increase subscriber signups, too).
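
As a rough illustration of the difference (the inline styles, wording and /newsletter/ URL below are hypothetical, and dismissal behaviour is omitted for brevity), the issue is largely how much of the main content is covered when the page loads:

  <!-- likely problematic: a full-screen overlay that hides the main content on arrival -->
  <div style="position:fixed; top:0; left:0; width:100%; height:100%; background:#fff;">
    Sign up to our newsletter before reading this article...
  </div>

  <!-- lower risk: a small, dismissible banner that leaves the main content readable -->
  <div style="position:fixed; bottom:0; left:0; width:100%; padding:8px; background:#eee;">
    Enjoying this guide? <a href="/newsletter/">Subscribe</a> <button type="button">Dismiss</button>
  </div>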

What is the PURPOSE of your page?

Is it to “sell products or services”, “to entertain” or “to share information about a topic”?

MAKE THE PURPOSE OF YOUR PAGE SINGULAR and OBVIOUS to help quality raters and algorithms.

The name of the game in 2018 (if you’re not faking everything) is VISITOR SATISFACTION.

If a visitor lands on your page – are they satisfied and can they successfully complete WHY they are there?

Ranking could be based on a ‘duration metric’

QUOTE: “The average duration metric for the particular group of resources can be a statistical measure computed from a data set of measurements of a length of time that elapses between a time that a given user clicks on a search result included in a search results web page that identifies a resource in the particular group of resources and a time that the given user navigates back to the search results web page. …Thus, the user experience can be improved because search results higher in the presentation order will better match the user’s informational needs.” High Quality Search Results based on Repeat Clicks and Visit Duration

Rankings could be based on a ‘duration performance score

QUOTE: “The duration performance scores can be used in scoring resources and websites for search operations. The search operations may include scoring resources for search results, prioritizing the indexing of websites, suggesting resources or websites, protecting particular resources or websites from demotions, precluding particular resources or websites from promotions, or other appropriate search operations.” A Panda Patent on Website and Category Visit Durations

What Makes A Page Spam?

What makes a page spam?:

  • Hidden text or links – may be exposed by selecting all page text and scrolling to the bottom (all text is highlighted), disabling CSS/Javascript, or viewing source code
  • Sneaky redirects – redirecting through several URLs, rotating destination domains, cloaking with JavaScript redirects and 100% frames
  • Keyword stuffing – no percentage or keyword density given; this is up to the rater
  • PPC ads that only serve to make money, not help users
  • Copied/scraped content and PPC ads
  • Feeds with PPC ads
  • Doorway pages – multiple landing pages that all direct the user to the same destination
  • Templates and other computer-generated pages mass-produced, marked by copied content and/or slight keyword variations
  • Copied message boards with no other page content
  • Fake search pages with PPC ads
  • Fake blogs with PPC ads, identified by copied/scraped or nonsensical spun content
  • Thin affiliate sites that only exist to make money, identified by checkout on a different domain, image properties showing origination at another URL, lack of original content, different WhoIs registrants of the two domains in question
  • Pure PPC pages with little to no content
  • Parked domains

There’s more on this announcement at SEW.

If A Page Exists Only To Make Money, The Page Is Spam, to Google

QUOTE: “If A Page Exists Only To Make Money, The Page Is Spam” GOOGLE

That statement above in the original quality rater guidelines is standout and should be a heads up to any webmaster out there who thinks they are going to make a “fast buck” from Google organic listings in 2018.

It should, at least, make you think about the types of pages you are going to spend your valuable time making.

Without VALUE ADD for Google’s users – don’t expect to rank high for commercial keywords.

If you are making a page today with the sole purpose of making money from it – and especially with free traffic from Google – you obviously didn’t get the memo.

Consider this statement from a manual reviewer:

QUOTE: “…when they DO get to the top, they have to be reviewed with a human eye in order to make sure the site has quality.” potpiegirl

It’s worth remembering:

  • If A Page Exists Only To Make Money, The Page Is Spam
  • If A Site Exists Only To Make Money, The Site Is Spam

This is how what you make will be judged – whether it is fair or not.


Of course not – and in some cases, it levels the playing field, especially if you are willing to:

  • Differentiate yourself
  • Be Remarkable
  • Be accessible
  • Add unique content to your site
  • Help users in an original way

Google doesn’t care about search engine optimizers or the vast majority of websites but the search engine giant DOES care about HELPING ITS OWN USERS.

So, if you are helping visitors that come from Google – and not by just directing them to another website – you are probably doing one thing right at least.

With this in mind – I am already building affiliate sites differently, for instance.

Doorway Pages

Google algorithms consistently target sites with doorway pages in quality algorithm updates. The definition of  a “doorway page” can change over time.

For example in the images below (from 2011), all pages on the site seemed to be hit with a -50+ ranking penalty for every keyword phrase the website ranked for.

At first, Google rankings for commercial keyword phrases collapsed, which led to somewhat of a “traffic apocalypse”:

(Image: Google ‘detected doorway pages’ penalty)

The webmaster then received an email from Google via Google Webmaster Tools (now called Google Search Console):

QUOTE: “Google Webmaster Tools notice of detected doorway pages on xxxxxxxx – Dear site owner or webmaster of xxxxxxxx, We’ve detected that some of your site’s pages may be using techniques that are outside Google’s Webmaster Guidelines. Specifically, your site may have what we consider to be doorway pages – groups of “cookie cutter” or low-quality pages. Such pages are often of low value to users and are often optimized for single words or phrases in order to channel users to a single location. We believe that doorway pages typically create a frustrating user experience, and we encourage you to correct or remove any pages that violate our quality guidelines. Once you’ve made these changes, please submit your site for reconsideration in Google’s search results. If you have any questions about how to resolve this issue, please see our Webmaster Help Forum for support.” Google Search Quality Team

At the time, I didn’t immediately class the pages on the affected sites in question as doorway pages. It’s evident Google’s definition of a doorway page changes over time.

A lot of people do not realise they are building what Google classes as doorway pages….. and it was indicative to me that ….. what you intend to do with the traffic Google sends you may in itself, be a ranking factor not too often talked about.

What Does Google Classify As Doorway Pages?

Google classes many types of pages as doorway pages. Doorway pages can be thought of as lots of pages on a website designed to rank for very specific keywords using minimal original text content e.g. location pages often end up looking like doorway pages.

It’s actually a very interesting aspect of modern SEO and one that is constantly shifting.

In the recent past, location-based SERPs were often lower-quality, and so Google historically ranked location-based doorway pages in many instances.

There is some confusion for real businesses who THINK they SHOULD rank for specific locations where they are not geographically based and end up using doorway-type pages to rank for these locations.

What Google Says About Doorway Pages

Google said a few years ago:

QUOTE: “For example, searchers might get a list of results that all go to the same site. So if a user clicks on one result, doesn’t like it, and then tries the next result in the search results page and is taken to that same site that they didn’t like, that’s a really frustrating experience.” Google

A question about using content spread across multiple pages and targeting different geographic locations on the same site was asked in the recent Hangout with Google’s John Mueller:

QUOTE: “We are a health services comparison website…… so you can imagine that for the majority of those pages the content that will be presented in terms of the clinics that will be listed looking fairly similar right and the same I think holds true if you look at it from the location …… we’re conscious that this causes some kind of content duplication so the question is is this type … to worry about? “

Bearing in mind that (while it is not the optimal use of pages) Google does not ‘penalise’ a website for duplicating content across internal pages in a non-malicious way, John’s clarification of location-based pages on a site targeting different regions is worth noting:

QUOTE: “For the mostpart it should be fine I think the the tricky part that you need to be careful about is more around doorway pages in the sense that if all of these pages end up with the same business then that can look a lot like a doorway page but like just focusing on the content duplication part that’s something that for the most part is fine what will happen there is will index all of these pages separately because from  from a kind of holistic point of view these pages are unique they have unique content on them they might have like chunks of text on them which are duplicated but on their own these pages are unique so we’ll index them separately and in the search results when someone is searching for something generic and we don’t know which of these pages are the best ones we’ll pick one of these pages and show that to the user and filter out the other variations of that that page so for example if someone in Ireland is just looking for dental bridges and you have a bunch of different pages for different kind of clinics that offer the service and probably will pick one of those pages and show those in the search results and filter out the other ones.

But essentially the idea there is that this is a good representative of the the content from your website and that’s all that we would show to users on the other hand if someone is specifically looking for let’s say dental bridges in Dublin then we’d be able to show the appropriate clinic that you have on your website that matches that a little bit better so we’d know dental bridges is something that you have a lot on your website and Dublin is something that’s unique to this specific page so we’d be able to pull that out and to show that to the user like that so from a pure content duplication point of view that’s not really something I totally worry about.

I think it makes sense to have unique content as much as possible on these pages but it’s not not going to like sync the whole website if you don’t do that we don’t penalize a website for having this kind of deep duplicate content and kind of going back to the first thing though with regards to doorway pages that is something I definitely look into to make sure that you’re not running into that so in particular if this is like all going to the same clinic and you’re creating all of these different landing pages that are essentially just funneling everyone to the same clinic then that could be seen as a doorway page or a set of doorway pages on our side and it could happen that the web spam team looks at that and says this is this is not okay you’re just trying to rank for all of these different variations of the keywords and the pages themselves are essentially all the same and they might go there and say we need to take a manual action and remove all these pages from search so that’s kind of one thing to watch out for in the sense that if they are all going to the same clinic then probably it makes sense to create some kind of a summary page instead whereas if these are going to two different businesses then of course that’s kind of a different situation it’s not it’s not a doorway page situation.”

The takeaway here is that if you have LOTS of location pages serving ONE SINGLE business in one location, then those are very probably classed as some sort of doorway pages, and probably old-school SEO techniques for these type of pages will see them classed as lower-quality – or even – spammy pages.

Google has long warned webmasters about using Doorway pages but many sites still employ them, because, either:

  • their business model depends on it for lead generation
  • the alternative is either a lot of work or
  • they are not creative enough or
  • they are not experienced enough to avoid the pitfalls of having lower-quality doorway pages on a site or
  • they are experienced enough to understand what impact they might be having on a site quality score

Google has a doorway page algorithm which no doubt they constantly improve upon. Google warned:

QUOTE: “Over time, we’ve seen sites try to maximize their “search footprint” without adding clear, unique value. These doorway campaigns manifest themselves as pages on a site, as a number of domains, or a combination thereof. To improve the quality of search results for our users, we’ll soon launch a ranking adjustment to better address these types of pages. Sites with large and well-established doorway campaigns might see a broad impact from this change.” Google 2015

If you have location pages that serve multiple locations or businesses, then those are not doorway pages and should be improved uniquely to rank better, according to John’s advice.

Are You Making Doorway Pages?

Search Engine Land offered search engine optimisers this clarification from Google:

QUOTE: “How do you know if your web pages are classified as a “doorway page?” Google said asked yourself these questions:

  • Is the purpose to optimize for search engines and funnel visitors into the actual usable or relevant portion of your site, or are they an integral part of your site’s user experience?
  • Are the pages intended to rank on generic terms yet the content presented on the page is very specific?
  • Do the pages duplicate useful aggregations of items (locations, products, etc.) that already exist on the site for the purpose of capturing more search traffic?
  • Are these pages made solely for drawing affiliate traffic and sending users along without creating unique value in content or functionality?
  • Do these pages exist as an “island?” Are they difficult or impossible to navigate to from other parts of your site? Are links to such pages from other pages within the site or network of sites created just for search engines?”

Barry Schwartz at Seroundtable picked this up:

QUOTE: “Well, a doorway page would be if you have a large collection of pages where you’re just like tweaking the keywords on those pages for that.

I think if you focus on like a clear purpose for the page that’s outside of just I want to rank for this specific variation of the keyword then that’s that’s usually something that leads to a reasonable result.

Whereas if you’re just taking a list of keywords and saying I need to make pages for each of these keywords and each of the permutations that might be for like two or three of those keywords then that’s just creating pages for the sake of keywords which is essentially what we look at as a doorway.”

Note I underlined the following statement:

QUOTE: “focus on like a clear purpose for the page that’s outside of just I want to rank for this specific variation of the keyword.”

That is because sometimes, often, in fact, there is an alternative to doorway pages for location pages that achieve essentially the same thing for webmasters.

Naturally, business owners want to rank for lots of keywords in organic listings with their website. The challenge for webmasters and SEO is that Google doesn’t want business owners to rank for lots of keywords using autogenerated content especially when that produces A LOT of pages on a website using (for instance) a list of keyword variations page-to-page.

QUOTE: “7.4.3 Automatically ­Generated Main Content Entire websites may be created by designing a basic template from which hundreds or thousands of pages are created, sometimes using content from freely available sources (such as an RSS feed or API). These pages are created with no or very little time, effort, or expertise, and also have no editing or manual curation. Pages and websites made up of auto­generated content with no editing or manual curation, and no original content or value added for users, should be rated Lowest.” Google Search Quality Evaluator Guidelines 2017

The end-result is webmasters create doorway pages without even properly understanding what they represent to Google and without realising Google will not index all these autogenerated pages.

WIKIPEDIA says of doorway pages:

QUOTE: “Doorway pages are web pages that are created for spamdexing. This is for spamming the index of a search engine by inserting results for particular phrases with the purpose of sending visitors to a different page.”


“Spamdexing, which is a word derived from “spam” and “indexing,” refers to the practice of search engine spamming. It is a form of SEO spamming.”

Google says:

“Doorways are sites or pages created to rank highly for specific search queries. They are bad for users because they can lead to multiple similar pages in user search results, where each result ends up taking the user to essentially the same destination. They can also lead users to intermediate pages that are not as useful as the final destination.

Here are some examples of doorways:

  • Having multiple domain names or pages targeted at specific regions or cities that funnel users to one page
  • Pages generated to funnel visitors into the actual usable or relevant portion of your site(s)
  • Substantially similar pages that are closer to search results than a clearly defined, browseable hierarchy”

I’ve bolded:

“Doorways are sites or pages created to rank highly for specific search queries”

Take note: It is not just location pages that are classed as doorway pages:

QUOTE: “For Google, that’s probably overdoing it and ends up in a situation you basically create a doorway site …. with pages of low value…. that target one specific query.” John Mueller 2018

If your website is made up of lower-quality doorway-type pages using old SEO techniques (which are more and more labelled as spam in 2018), then Google will not index all of the pages, and your website ‘quality score’ is probably going to be negatively impacted.


If you are making keyword-rich location pages for a single business website, there’s a risk these pages will be classed as doorway pages in 2018.

If you know you have VERY low-quality doorway pages on your site, you should remove them or rethink your SEO strategy if you want to rank high in Google for the long term.
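
If you do decide to take very low-quality doorway-type pages out of Google’s index while you rework or consolidate them, one common approach (a sketch only – whether to noindex, redirect or delete outright depends on the site) is a robots meta tag in the head of each affected page:

  <head>
    <!-- keeps this page out of Google's index while links on it can still be followed -->
    <meta name="robots" content="noindex, follow">
  </head>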

Location-based pages are suitable for some kind of websites, and not others.

What Is E.A.T.?

Google aims to rank pages where the author has some demonstrable expertise or experience in the subject matter they are writing about. These ‘quality ratings’ (performed by human evaluators) are based on E.A.T. (or EAT or E-A-T), which is simply the ‘Expertise, Authoritativeness, Trustworthiness’ of the ‘Main Content’ of a page.

QUOTE: “Expertise, Authoritativeness, Trustworthiness: This is an important quality characteristic. …. Important: Lacking appropriate E­A­T is sufficient reason to give a page a Low quality rating.” Google Search Quality Evaluator Guidelines 2017


QUOTE: “The amount of expertise, authoritativeness, and trustworthiness (E­A­T) that a webpage/website has is very important. MC quality and amount, website information, and website reputation all inform the E­A­T of a website. Think about the topic of the page. What kind of expertise is required for the page to achieve its purpose well? The standard for expertise depends on the topic of the page.” Google Search Quality Evaluator Guidelines 2017

Who links to you can inform the E-A-T of your website.

Consider this:

QUOTE: “I asked Gary (Illyes from Google) about E-A-T. He said it’s largely based on links and mentions on authoritative sites. i.e. if the Washington post mentions you, that’s good. He recommended reading the sections in the QRG on E-A-T as it outlines things well.” Marie Haynes, Pubcon 2018

Google is still a ‘link-based’ search engine ‘under-the-hood’ but it takes so much more to stick a website at the top of search engine results pages (SERPs) in 2018 than it used to.

Main Content (MC) of a Page

QUOTE: “(Main CONTENT) is (or should be!) the reason the page exists.” Google Search Quality Evaluator Guidelines 2017

What Is Google Focused On?

Google is concerned with the PURPOSE of a page, the MAIN CONTENT (MC) of a page, the SUPPLEMENTARY CONTENT of a page and HOW THAT PAGE IS monetised, and if that monetisation impacts the user experience of consuming the MAIN CONTENT.

Webmasters need to be careful when optimising a website for CONVERSION first if that gets in the way of a user’s consumption of the main content on the page.

Google also has a “Page Layout Algorithm” that demotes pages with a lot of advertising “above the fold” or that forces users to scroll past advertisements to get to the Main Content of the page.

High-quality supplementary content should “(contribute) to a satisfying user experience on the page and website.” and it should NOT interfere or distract from the MC.

Google says, “(Main CONTENT) is (or should be!) the reason the page exists.” – so this is probably the most important part of the page, to Google.
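
One practical way to keep that distinction obvious in your own templates (a sketch using standard HTML5 semantic elements – Google does not require this particular markup) is to separate the Main Content from navigation and other supplementary blocks:

  <body>
    <header>Site name and logo (supplementary content)</header>
    <nav>Navigation links to other parts of the website (supplementary content)</nav>
    <main>
      <h1>The topic of the page</h1>
      <p>The main content – the reason the page exists.</p>
    </main>
    <aside>Related links and other helpful supplementary content</aside>
    <footer>Ownership, copyright and contact details</footer>
  </body>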

Supplementary Content (SC) on a Page

An example of “supplementary” content is “navigation links that allow users to visit other parts of the website” and “footers” and “headers.”

What is SC (supplementary content)?

QUOTE: “Supplementary Content contributes to a good user experience on the page, but does not directly help the page achieve its purpose. SC is created by Webmasters and is an important part of the user experience. One common type of SC is navigation links which allow users to visit other parts of the website. Note that in some cases, content behind tabs may be considered part of the SC of the page.” Google Search Quality Evaluator Guidelines 2017

When it comes to a web page and positive UX, Google talks a lot about the functionality and utility of Helpful Supplementary Content – e.g. helpful navigation links for users (that are not, generally, MC or Ads).

QUOTE: “To summarize, a lack of helpful SC may be a reason for a Low quality rating, depending on the purpose of the page and the type of website. We have different standards for small websites which exist to serve their communities versus large websites with a large volume of webpages and content. For some types of “webpages,” such as PDFs and JPEG files, we expect no SC at all.” Google Search Quality Evaluator Guidelines 2017

It is worth remembering that Good supplementary content cannot save Poor main content from a low-quality page rating:

QUOTE: “Main Content is any part of the page that directly helps the page achieve its purpose“. Google Search Quality Evaluator Guidelines 2017

Good SC seems to certainly be a sensible option. It always has been.

Key Points about SC

  1. Supplementary Content can be a large part of what makes a High-quality page very satisfying for its purpose.
  2. Helpful SC is content that is specifically targeted to the content and purpose of the page.
  3. Smaller websites such as websites for local businesses and community organizations, or personal websites and blogs, may need less SC for their purpose.
  4. A page can still receive a High or even Highest rating with no SC at all.

Here are the specific quotes containing the term SC:

  1. Supplementary Content contributes to a good user experience on the page, but does not directly help the page achieve its purpose.
  2. SC is created by Webmasters and is an important part of the user experience. One common type of SC is navigation links which allow users to visit other parts of the website. Note that in some cases, content behind tabs may be considered part of the SC of the page.
  3. SC which contributes to a satisfying user experience on the page and website. – (A mark of a high-quality site – this statement was repeated 5 times)
  4. However, we do expect websites of large companies and organizations to put a great deal of effort into creating a good user experience on their website, including having helpful SC. For large websites, SC may be one of the primary ways that users explore the website and find MC, and a lack of helpful SC on large websites with a lot of content may be a reason for a Low rating.
  5. However, some pages are deliberately designed to shift the user’s attention from the MC to the Ads, monetized links, or SC. In these cases, the MC becomes difficult to read or use, resulting in a poor user experience. These pages should be rated Low.
  6. Misleading or potentially deceptive design makes it hard to tell that there’s no answer, making this page a poor user experience.
  7. Redirecting is the act of sending a user to a different URL than the one initially requested. There are many good reasons to redirect from one URL to another, for example, when a website moves to a new address. However, some redirects are designed to deceive search engines and users. These are a very poor user experience, and users may feel tricked or confused. We will call these “sneaky redirects.” Sneaky redirects are deceptive and should be rated Lowest.
  8. However, you may encounter pages with a large amount of spammed forum discussions or spammed user comments. We’ll consider a comment or forum discussion to be “spammed” if someone posts unrelated comments which are not intended to help other users, but rather to advertise a product or create a link to a website. Frequently these comments are posted by a “bot” rather than a real person. Spammed comments are easy to recognize. They may include Ads, download, or other links, or sometimes just short strings of text unrelated to the topic, such as “Good,” “Hello,” “I’m new here,” “How are you today,” etc. Webmasters should find and remove this content because it is a bad user experience.
  9. The modifications make it very difficult to read and are a poor user experience. (Lowest quality MC (copied content with little or no time, effort, expertise, manual curation, or added value for users))
  10. Sometimes, the MC of a landing page is helpful for the query, but the page happens to display porn ads or porn links outside the MC, which can be very distracting and potentially provide a poor user experience.
  11. The query and the helpfulness of the MC have to be balanced with the user experience of the page.
  12. Pages that provide a poor user experience, such as pages that try to download malicious software, should also receive low ratings, even if they have some images appropriate for the query.

The Importance of Unique Content For Your Website

QUOTE: “Duplicated content is often not manipulative and is commonplace on many websites and often free from malicious intent. Copied content can often be penalised algorithmically or manually. Duplicate content is not penalised, but this is often not an optimal set-up for pages, either. Be VERY careful ‘spinning’ ‘copied’ text to make it unique!” Shaun Anderson, Hobo, 2018

From a quality page point of view, duplicate content (or rather, copied content) can be a low-quality indicator.

Boilerplate (especially spun) text can be another low-quality indicator.

If your website is tarnished with these practices – it is going to be classed ‘low-quality’ by some part of the Google algorithm:

  • If all you have on your page are indicators of low-quality – you have a low-quality page in 2018 – full stop.
  • If your entire website is made up of pages like that, you have a low-quality website.
  • If you have manipulative backlinks, then that’s a recipe for disaster.

Balancing Conversions With Usability & User Satisfaction

Take pop-up windows or window pop-unders as an example:

According to usability expert Jakob Nielsen, 95% of website visitors hated unexpected or unwanted pop-up windows, especially those that contain unsolicited advertising.

In fact, Pop-Ups have been consistently voted the Number 1 Most Hated Advertising Technique since they first appeared many years ago.

Website accessibility aficionados will point out:

  • creating a new browser window should be the authority of the user
  • pop-up new windows should not clutter the user’s screen.
  • all links should open in the same window by default. (An exception, however, may be made for pages containing a links list. It is convenient in such cases to open links in another window so that the user can come back to the links page easily. Even in such cases, it is advisable to give the user a prior note that links would open in a new window).
  • Tell visitors they are about to invoke a pop-up window (using the link title attribute)
  • Popup windows do not work in all browsers.
  • They are disorienting for users
  • Provide the user with an alternative.

It is, however, an inconvenient truth for accessibility and usability aficionados to hear that pop-ups can be used successfully to vastly increase signup subscription conversions.

EXAMPLE: TEST With Using A Pop Up Window

Pop-ups suck, everybody seems to agree. Here’s the little test I carried out on a subset of pages, an experiment to see if pop-ups work on this site to convert more visitors to subscribers.

I tested it out during a period when I didn’t blog for a few months and traffic was very stable.


Testing Pop Up Windows Results

[Table: email subscriber totals and % change with the pop-up window on (week 1) vs off (week 2) – the figures are not recoverable in this version of the page]

That’s a fair increase in email subscribers across the board in this small experiment on this site. Using a pop up does seem to have an immediate impact.

I have since tested it on and off for a few months and the results from the small test above have been repeated over and over.

I’ve tested different layouts and different calls to actions without pop-ups, and they work too, to some degree, but they typically take a bit longer to deploy than activating a plugin.

I don’t really like pop-ups as they have been an impediment to web accessibility but it’s stupid to dismiss out-of-hand any technique that works.

In my tests, using pop-ups really seemed to kill how many people share a post in social media circles.

With Google now showing an interest in interstitials (especially on mobile versions of your site), I would be very nervous about employing a pop-up window that obscures the primary reason for visiting the page.

If Google detects any user dissatisfaction, this can be very bad news for your rankings.

QUOTE: “Interstitials that hide a significant amount of content provide a bad search experience” Google, 2015

I am, at the moment, using an exit-intent pop-up window; hopefully, by the time a user sees it, they are FIRST satisfied with the content they came to read. I can recommend this as a way to increase your subscribers, at the moment, with a similar conversion rate to entry pop-ups – if NOT BETTER.

I think, as an optimiser, it is sensible to convert customers without using techniques that potentially negatively impact Google rankings.

Do NOT let conversion get in the way of the PRIMARY reason a Google visitor is on ANY PARTICULAR PAGE or you risk Google detecting relative dissatisfaction with your site, and that is not going to help you as Google gets better at working out what ‘quality’ actually means.

NOTE: User Experience Across Multiple Devices & Screen Resolutions

User Experience is a big part of successful search engine optimisation in 2018 – and a big factor in the Google Panda algorithm.

When thinking of designing a web-page in 2018, you are going to have to consider where certain elements appear on that webpage, especially advertisements.

Google will tell you if the ads on your website are annoying users which may impact the organic traffic Google sends you.

Annoying ads on your web pages have long been a problem for users (probably) and for Google, too – even if they do make you money.

What ads are annoying?

  • ‘ the kind that blares music unexpectedly ‘ or
  • ‘ a pop-up on top of the one thing we’re trying to find ‘

Apparently ‘frustrating experiences can lead people to install ad blockers and when ads are blocked publishers can’t make money’.

The video goes on to say:

QUOTE: “a survey of hundreds of ad experiences by the Coalition for Better Ads has shown that people don’t hate all ads, just annoying ones. Eliminating these ads from your site can make a huge difference” Google, 2017

The New Ad Experience Report In Google Search Console

The ad experience report is part of Google Search Console.

The report:

QUOTE: ‘ makes it easy to find annoying ads on your site and replace them with user-friendly ones ‘. Google, 2017

NOTE: Which type of Adverts Annoys Users?

Google states:

“The Ad Experience Report is designed to identify ad experiences that violate the Better Ads Standards, a set of ad experiences the industry has identified as being highly annoying to users. If your site presents violations, the Ad Experience Report may identify the issues to fix.”

The Better Ads Standards people are focused on the following annoying ads:

Desktop Web Experiences

Which type of ads on desktop devices annoy users the most?

  • Pop-up Ads
  • Auto-playing Video Ads with Sound
  • Prestitial Ads with Countdown
  • Large Sticky Ads

Mobile Web Experiences

Which type of ads on mobile devices annoy users the most?

  • Pop-up Ads
  • Auto-playing Video Ads with Sound
  • Prestitial Ads
  • Postitial Ads with Countdown
  • Ad Density Higher Than 30%
  • Full-screen Scroll through Ads
  • Flashing Animated Ads
  • Large Sticky Ads

Google says in the video:

QUOTE: “once you’ve fixed the issues you can submit your site for a real review. We’ll look at a new sample of pages and may find out experiences that were missed previously. We’ll e-mail you when the results are in.” Google, 2017

Google offers some alternatives to using pop-ups, if you are interested:

QUOTE: “In place of a pop-up try a full-screen inline ad. It offers the same amount of screen real estate as pop-ups without covering up any content. Fixing the problem depends on the issue you have for example if it’s a pop-up you’ll need to remove all the pop-up ads from your site but if the issue is high ad density on a page you’ll need to reduce the number of ads” Google, 2017

Your Website Will Receive A LOW RATING If It Has Annoying Or Distracting Ads or annoying Supplementary Content (SC)

Google has long warned about web page advertisements and distractions on a web page that results in a poor user experience.

The following specific examples are taken from the Google Search Quality Evaluator Guidelines 2017.

6.3 Distracting/Disruptive/Misleading Titles, Ads, and Supplementary Content

Some Low-quality pages have adequate MC (main content on the page) present, but it is difficult to use the MC due to disruptive, highly distracting, or misleading Ads/SC. Misleading titles can result in a very poor user experience when users click a link only to find that the page does not match their expectations.

6.3.1 Ads or SC that disrupt the usage of MC

While we expect Ads and SC to be visible, some Ads, SC or interstitial pages (i.e., pages displayed before or after the content you are expecting) make it extremely difficult to use the MC. Pages that disrupt the use of the MC should be given a Low rating.

Google gives some examples:

  • ‘Ads that actively float over the MC as you scroll down the page and are difficult to close. It can be very hard to use MC when it is actively covered by moving, difficult­-to-­close Ads.’
  • ‘An interstitial page that redirects the user away from the MC without offering a path back to the MC.’

6.3.2 Prominent presence of distracting SC or Ads

Google says:

“Users come to web pages to use the MC. Helpful SC and Ads can be part of a positive user experience, but distracting SC and Ads make it difficult for users to focus on and use the MC.

Some webpages are designed to encourage users to click on SC that is not helpful for the purpose of the page. This type of SC is often distracting or prominently placed in order to lure users to highly monetized pages.

Either porn SC or Ads containing porn on non­Porn pages can be very distracting or even upsetting to users. Please refresh the page a few times to see the range of Ads that appear, and use your knowledge of the locale and cultural sensitivities to make your rating. For example, an ad for a model in a revealing bikini is probably acceptable on a site that sells bathing suits. However, an extremely graphic porn ad may warrant a Low (or even Lowest) rating.”

6.3.3 Misleading Titles, Ads, or SC

Google says:

QUOTE: “It should be clear what parts of the page are MC, SC, and Ads. It should also be clear what will happen when users interact with content and links on the webpage. If users are misled into clicking on Ads or SC, or if clicks on Ads or SC leave users feeling surprised, tricked or confused, a Low rating is justified.

  • At first glance, the Ads or SC appear to be MC. Some users may interact with Ads or SC, believing that the Ads or SC is the MC.
  • Ads appear to be SC (links) where the user would expect that clicking the link will take them to another page within the same website, but actually take them to a different website. Some users may feel surprised or confused when clicking SC or links that go to a page on a completely different website.
  • Ads or SC that entice users to click with shocking or exaggerated titles, images, and/or text. These can leave users feeling disappointed or annoyed when they click and see the actual and far less interesting content.
  • Titles of pages or links/text in the SC that are misleading or exaggerated compared to the actual content of the page. This can result in a very poor user experience when users read the title or click a link only to find that the page does not match their expectations. “

The important thing to know here is:

QUOTE: “Summary: The Low rating should be used for disruptive or highly distracting Ads and SC. Misleading Titles, Ads, or SC may also justify a Low rating. Use your judgment when evaluating pages. User expectations will differ based on the purpose of the page and cultural norms.”

… and that Google does not send free traffic to sites it rates as low quality.

Recommendation: Remove annoying ads on your site.

How To Fix Issues Found In the Google Ad Experience Report

  • you will need to sign up for Google Search Console (AKA Google Webmaster Tools)
  • review the Ad experience report
  • if your site hasn’t been reviewed or has passed review, the report won’t show anything
  • if your review status is ‘warning’ or ‘failing’, violations will be listed in the “what we found” column of the ad review report, based on a sample of pages from both desktop and mobile versions of your site
  • ‘if negative ad experiences are found they are listed separately in the report since a bad experience on mobile may not be as annoying on desktop’
  • Google will highlight ‘site design issues such as pop-ups or large sticky ads‘ and rather cleverly will show you ‘a video of the ad that was flagged’
  • ‘Creative issues are shown on your site through ad tags like flashing animated ads or autoplay videos with sound’
  • remove annoying ads from your site
  • submit your site for a review of your ad experience in Search Console.

What Does Google Mean By “Low-Quality“?

Google has a history of classifying your site as some type of entity, and whatever that is, you don’t want a low-quality label on it – whether put there by algorithm or human. Manual evaluators might not directly impact your rankings, but any signal associated with Google marking your site as low-quality should probably be avoided.

If you are making websites to rank in Google without unnatural practices, you are going to have to meet Google’s expectations in the Quality Raters Guidelines.

Google says:

QUOTE: “Low-quality pages are unsatisfying or lacking in some element that prevents them from achieving their purpose well.” Google, 2017

‘Sufficient Reason’

There is ‘sufficient reason’ in some cases to immediately mark the page down in some areas, and Google directs quality raters to do so:

  • An unsatisfying amount of MC is a sufficient reason to give a page a Low-quality rating.
  • Low-quality MC is a sufficient reason to give a page a Low-quality rating.
  • Lacking appropriate E-A-T is sufficient reason to give a page a Low-quality rating.
  • Negative reputation is sufficient reason to give a page a Low-quality rating.

What are low-quality pages?

When it comes to defining what a low-quality page is, Google is evidently VERY interested in the quality of the Main Content (MC) of a page:

Main Content (MC)

Google says MC should be the ‘main reason a page exists’.

  • The quality of the MC is low.
  • There is an unsatisfying amount of MC for the purpose of the page.
  • There is an unsatisfying amount of website information.


  • This content has many problems: poor spelling and grammar, complete lack of editing, inaccurate information. The poor quality of the MC is a reason for the Lowest+ to Low rating. In addition, the popover ads (the words that are double underlined in blue) can make the main content difficult to read, resulting in a poor user experience.
  • Pages that provide a poor user experience, such as pages that try to download malicious software, should also receive low ratings, even if they have some images appropriate for the query.


  • If a page seems poorly designed, take a good look. Ask yourself if the page was deliberately designed to draw attention away from the MC. If so, the Low rating is appropriate.
  • The page design is lacking. For example, the page layout or use of space distracts from the MC, making it difficult to use the MC.


  • You should consider who is responsible for the content of the website or content of the page you are evaluating. Does the person or organization have sufficient expertise for the topic? If expertise, authoritativeness, or trustworthiness is lacking, use the Low rating.
  • There is no evidence that the author has medical expertise. Because this is a YMYL medical article, lacking expertise is a reason for a Low rating.
  • The author of the page or website does not have enough expertise for the topic of the page and/or the website is not trustworthy or authoritative for the topic. In other words, the page/website is lacking E-A-T.

After page content, the following are given the most weight in determining if you have a high-quality page.


  • Unhelpful or distracting SC that benefits the website rather than helping the user is a reason for a Low rating.
  • The SC is distracting or unhelpful for the purpose of the page.
  • The page is lacking helpful SC.
  • For large websites, SC may be one of the primary ways that users explore the website and find MC, and a lack of helpful SC on large websites with a lot of content may be a reason for a Low rating


  • For example, an ad for a model in a revealing bikini is probably acceptable on a site that sells bathing suits, however, an extremely distracting and graphic porn ad may warrant a Low rating.


  • If the website feels inadequately updated and inadequately maintained for its purpose, the Low rating is probably warranted.
  • The website is lacking maintenance and updates.


  • Credible negative (though not malicious or financially fraudulent) reputation is a reason for a Low rating, especially for a YMYL page.
  • The website has a negative reputation.


When it comes to Google assigning your page the lowest rating, you are probably going to have to go some way to hit this, but it gives you a direction you want to ensure you avoid at all costs.

Google says throughout the document, that there are certain pages that…

QUOTE: “should always receive the Lowest rating” Google, 2017

…and these are presented below. Note – these statements are spread throughout the raters’ document and not listed the way I have listed them here. I don’t think any context is lost presenting them like this, and it makes it more digestible.

Anyone familiar with Google Webmaster Guidelines will be familiar with most of the following:

  • True lack of purpose pages or websites.
    • Sometimes it is difficult to determine the real purpose of a page.
  • Pages on YMYL websites with completely inadequate or no website information.
  • Pages or websites that are created to make money with little to no attempt to help users.
  • Pages with extremely low or lowest quality MC.
    • If a page is deliberately created with no MC, use the Lowest rating. Why would a page exist without MC? Pages with no MC are usually lack of purpose pages or deceptive pages.
    • Webpages that are deliberately created with a bare minimum of MC, or with MC which is completely unhelpful for the purpose of the page, should be considered to have no MC
    • Pages deliberately created with no MC should be rated Lowest.
    • Important: The Lowest rating is appropriate if all or almost all of the MC on the page is copied with little or no time, effort, expertise, manual curation, or added value for users. Such pages should be rated Lowest, even if the page assigns credit for the content to another source.
  • Pages on YMYL (Your Money Or Your Life Transaction pages) websites with completely inadequate or no website information.
  • Pages on abandoned, hacked, or defaced websites.
  • Pages or websites created with no expertise or pages that are highly untrustworthy, unreliable, unauthoritative, inaccurate, or misleading.
  • Harmful or malicious pages or websites.
    • Websites that have extremely negative or malicious reputations. Also use the Lowest rating for violations of the Google Webmaster Quality Guidelines. Finally, Lowest+ may be used both for pages with many low-quality characteristics and for pages whose lack of a single Page Quality characteristic makes you question the true purpose of the page. Important: Negative reputation is sufficient reason to give a page a Low quality rating. Evidence of truly malicious or fraudulent behavior warrants the Lowest rating.
    • Deceptive pages or websites. Deceptive webpages appear to have a helpful purpose (the stated purpose), but are actually created for some other reason. Use the Lowest rating if a webpage is deliberately created to deceive and potentially harm users in order to benefit the website.
    • Some pages are designed to manipulate users into clicking on certain types of links through visual design elements, such as page layout, organization, link placement, font color, images, etc. We will consider these kinds of pages to have deceptive page design. Use the Lowest rating if the page is deliberately designed to manipulate users to click on Ads, monetized links, or suspect download links with little or no effort to provide helpful MC.
    • Sometimes, pages just don’t “feel” trustworthy. Use the Lowest rating for any of the following: Pages or websites that you strongly suspect are scams
    • Pages that ask for personal information without a legitimate reason (for example, pages which ask for name, birthdate, address, bank account, government ID number, etc.). Websites that “phish” for passwords to Facebook, Gmail, or other popular online services. Pages with suspicious download links, which may be malware.
  • Use the Lowest rating for websites with extremely negative reputations.

Websites ‘Lacking Care and Maintenance’ Are Rated ‘Low Quality’.

QUOTE: “Sometimes a website may seem a little neglected: links may be broken, images may not load, and content may feel stale or out-dated. If the website feels inadequately updated and inadequately maintained for its purpose, the Low rating is probably warranted.”

“Broken” or Non-Functioning Pages Classed As Low Quality

Google gives clear advice on creating useful 404 pages:

  1. Tell visitors clearly that the page they’re looking for can’t be found
  2. Use language that is friendly and inviting
  3. Make sure your 404 page uses the same look and feel (including navigation) as the rest of your site.
  4. Consider adding links to your most popular articles or posts, as well as a link to your site’s home page.
  5. Think about providing a way for users to report a broken link.
  6. Make sure that your webserver returns an actual 404 HTTP status code when a missing page is requested
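If you want to sanity-check point 6 above, a short script can confirm your web server really does return a 404 status code (and not a ‘soft 404’ that serves a 200 for a missing page). This is only a minimal sketch using the Python requests library – the example.com URLs are hypothetical placeholders for made-up paths on your own domain:

```python
# Minimal sketch: confirm missing pages return a real 404 status code.
# The URLs below are hypothetical - swap in made-up paths on your own domain.
import requests

test_urls = [
    "https://www.example.com/this-page-should-not-exist-12345",
    "https://www.example.com/another-deleted-page",
]

for url in test_urls:
    # Follow any redirects so we see the status of the final destination
    response = requests.get(url, allow_redirects=True, timeout=10)
    if response.status_code == 404:
        print(f"OK   - {url} returns a proper 404")
    else:
        # A 200 here usually means a 'soft 404' - a missing page served as if it exists
        print(f"WARN - {url} returns {response.status_code} (possible soft 404)")
```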

Google Is Not Going To Rank Low-Quality Pages When It Has Better Options

QUOTE: “Panda is an algorithm that’s applied to sites overall and has become one of our core ranking signals. It measures the quality of a site, which you can read more about in our guidelines. Panda allows Google to take quality into account and adjust ranking accordingly.” Google

If you have exact match instances of key-phrases on low-quality pages, these pages mostly won’t have all the compound ingredients it takes to rank high in Google in 2018.

I was working this way long before I understood it well enough to write anything about it.

Here is an example of taking a standard page that did not rank for years and then turning it into a topic-oriented resource page designed to fully meet a user’s intent:

Graph: Example traffic levels to a properly optimised page

Google, in many instances, would rather send long-tail search traffic, like users using mobile VOICE SEARCH, for instance, to high-quality pages ABOUT a concept/topic that explains relationships and connections between relevant sub-topics FIRST, rather than to only send that traffic to low-quality pages just because they have the exact phrase on the page.

Google has algorithms that target low-quality content and these algorithms are actually trained in some part by human quality raters.

Quality Raters Do Not Directly Impact YOUR site

QUOTE: “Ratings from evaluators do not determine individual site rankings.” GOOGLE

While Google is on record as stating these quality raters do not directly influence where you rank (without more senior analysts making a call on the quality of your website, I presume?) – there are some things in this document, mostly of a user experience nature (UX) that all search engine optimisers and Webmasters of any kind should note going forward.

From my own experience, an unsatisfying user experience signal can impact rankings even on a REPUTABLE DOMAIN and even with SOUND, SATISFYING CONTENT.

Quality Bar – Always Rising – Always Relative?

You’ve got to imagine all these quality ratings are getting passed along to the engineers at Google in some form (at some stage) to improve future algorithms – and identify borderline cases.

This is the ‘quality’ bar I’ve mentioned a couple of times in past posts.

Google is always raising the bar – always adding new signals, sometimes, in time, taking signals away.

It helps them

  1. satisfy users
  2. control the bulk of transactional web traffic.

That positioning has always been a win-win for Google – and a recognisable strategy from them after all these years.

Take unnatural links out of the equation (which have a history of trumping most other signals) and you are left with page level, site level and off-site signals.

All of these quality signals will need to be addressed to insulate against “Google Panda” (if that can ever be fully successful, against an algorithm that is modified to periodically “ask questions” of your site and overall quality score).

Google holds different types of sites to different standards for different kinds of keywords which would suggest not all websites need all signals satisfied to rank well in SERPs – not ALL THE TIME.

OBSERVATION – You can have the content and the links – but if your site falls short on even a single user satisfaction signal (even if it is picked up by the algorithm, and not a human reviewer) then your rankings for particular terms could collapse – OR – rankings can be held back – IF Google thinks your organisation, with its resources or reputation, should be delivering a better user experience to users.

OBSERVATION: In the past, a site often rose (in terms of traffic numbers) in Google, before a Panda ‘penalty’.

It may be the case (and I surmise this) that the introduction of a certain SEO technique initially artificially raised your rankings for your pages in a way that Google’s algorithms do not approve of, and once that problem is spread out throughout your site, traffic begins to deteriorate or is slammed in a future algorithm update.

Google says about the Search Quality Evaluator Guidelines:

QUOTE: “Note: Some Webmasters have read these rating guidelines and have included information on their sites to influence your Page Quality rating!” Google

Surely – that’s NOT a bad thing: making your site HIGHER QUALITY and correctly MARKETING your business to customers – and to search quality raters – in the process.

Black hat search engine optimisers will obviously fake all that (which is why it would be self-defeating of me to publish a handy list of signals to manipulate SERPs that’s not just “unnatural links”).

Businesses that care about the performance in Google organic should be noting ALL the following points very carefully.

This isn’t about manipulating quality Raters – it is about making it EASY for them to realise you are a REAL business, with a GOOD REPUTATION, and have a LEVEL of EXPERTISE you wish to share with the world.

The aim is to create a good user experience, not fake it

How Reputable & User-Friendly Is Your Website?

You can help quality raters EASILY research the reputation of your website especially if you have any positive history.

Make “reputation information about the website” easy to access for a quality rater, as judging the reputation of your website is a large part of what they do.

You will need to monitor, or influence, ‘independent’ reviews about your business – because if reviews are particularly negative – Google will “trust the independent sources”.

Consider a page that highlights your good press, if you have any.

  • Google will consider “positive user reviews as evidence of positive reputation”, so come up with a way to get legitimate positive reviews – and Google itself is a good place to start.
  • Google states, “News articles, Wikipedia articles, blog posts, magazine articles, forum discussions, and ratings from independent organizations can all be sources of reputation information” but they also state that, specifically, boasts about a lot of internet traffic, for example, should not influence the quality rating of a web page. What should influence the reputation of a page is WHO has shared it on social media etc. rather than just raw numbers of shares. CONSIDER CREATING A PAGE with nofollow links to good reviews on other websites as proof of excellence.
  • Google wants quality raters to examine subpages of your site and often “the URL of its associated homepage” so ensure your homepage is modern, up to date, informative and largely ON TOPIC with your internal pages.
  • Google wants to know a few things about your website, including:
    • Who is moderating the content on the site
    • Who is responsible for the website
    • Who owns copyright of the content
    • Business details (which is important to have synced and accurate across important social media profiles)
    • When was this content updated?
  • Be careful syndicating other people’s content. Algorithmic duplicate problems aside… if there is a problem with that content, Google will hold the site it finds the content on ‘responsible’ for that content.
  • If you take money online, in any way, you NEED to have an accessible and satisfying ‘customer service’ type page. Google says, “Contact information and customer service information are extremely important for websites that handle money, such as stores, banks, credit card companies, etc. Users need a way to ask questions or get help when a problem occurs. For shopping websites, we’ll ask you to do some special checks. Look for contact information—including the store’s policies on payment, exchanges, and returns. “ Google urges quality raters to be a ‘detective’ in finding this information about you – so it must be important to them.
  • Keep web pages updated regularly and let users know when the content was last updated. Google wants raters to “search for evidence that effort is being made to keep the website up to date and running smoothly”.
  • Google quality raters are trained to be sceptical of any reviews found. It’s normal for all businesses to have mixed reviews, but “Credible, convincing reports of fraud and financial wrongdoing is evidence of extremely negative reputation“.
  • Google asks quality raters to investigate your reputation by searching, giving the example [“” reviews –]: “A search on Google for reviews of “” which excludes pages on” – so do that search yourself and judge for yourself what your reputation is. Very low ratings on independent websites could play a factor in where you rank in the future, with Google stating clearly that it considers “very low ratings on the BBB site to be evidence for a negative reputation“. Other sites mentioned for reviewing your business include YELP and Amazon. Often – using rich snippets containing review information – you can get Google to display user ratings in the actual SERPs. I noted you can get ‘stars in SERPs’ within two days after I added the code (March 2014).
  • If you can get a Wikipedia page – get one! Keep it updated, too. For the rest of us, we’ll just need to work harder to prove we run a real business that has earned its rankings.
  • If you have a lot of NEGATIVE reviews – expect to be treated as a business with an “Extremely negative reputation” – and back in 2013 – Google mentioned they had an algorithm for this, too. Google has said the odd bad review is not what this algorithm looks for, as bad reviews are a natural part of the web.
  • For quality raters, Google has a Page Quality Rating Scale with 5 rating options on a spectrum of “Lowest, Low, Medium, High, and Highest.”
  • Google says “High-quality pages are satisfying and achieve their purpose well” and have lots of “satisfying” content, written by an expert or authority in their field – they go on to include “About Us” information pages and easy-to-access “Contact or Customer Service” information, etc.
  • Google is looking for a “website that is well cared for and maintained” so you need to keep content management systems updated, check for broken image links and HTML links. If you create a frustrating user experience through sloppy website maintenance – expect that to be reflected in some way with a lower quality rating. Google Panda October 2014 went for e-commerce pages that were optimised ‘the old way’ and are now classed as ‘thin content’.
  • Google wants raters to navigate your site and ‘test’ it out to see if it is working. They tell raters to check your shopping cart function is working properly, for instance.
  • Google expects pages to “be edited, reviewed, and updated on a regular basis” especially if they are for important issues like medical information, and states not all pages are held to such standards, but one can expect that Google wants information updated in a reasonable timescale. How reasonable this is, is dependent on the TOPIC and the PURPOSE of the web page RELATIVE to competing pages on the web.
  • Google wants to rank pages by expert authors, not from content farms.
  • You can’t have a great piece of content on a site with a negative reputation and expect it to perform well. A “High rating cannot be used for any website that has a convincing negative reputation.”
  • A very positive reputation can lift your content from “medium” to “high-quality“.
  • Google doesn’t care about ‘pretty‘ over substance and clearly instructs raters to “not rate based on how “nice” the page looks“.
  • Just about every webpage should have a CLEAR way to contact the site manager to achieve a high rating.
  • Highlighting ads in your design is BAD practice, and Google gives clear advice to rate the page LOW – Google wants you to optimise for A SATISFYING EXPERIENCE FIRST, CONVERSION SECOND! Conversion optimisers especially should take note of this, and aren’t we all?
  • Good news for web designers, content managers and search engine optimisers! Google clearly states, “If the website feels inadequately updated and inadequately maintained for its purpose, the Low rating is probably warranted”, although it does stipulate again that it’s horses for courses… if everybody else is crap, then you’ll still fly – though there are not many of those SERPs about these days.
  • If your intent is to deceive, be malicious or present pages with no purpose other than to monetise free traffic with no value add – Google is not your friend.
  • Domains that are ‘related’ in Whois can lead to a low-quality score, so be careful how you direct people around multiple sites you own.
  • Keyword stuffing your pages is not recommended, even if you do get past the algorithms.
  • Quality raters are on the lookout for content that is “copied with minimal alteration” and crediting the original source is not a way to get around this. Google rates this type of activity low-quality.
  • How can Google trust a page if it is blocked from it or from reading critical elements that make up that page? Be VERY careful blocking Google from important directories (blocking CSS and .js files are very risky these days). REVIEW your ROBOTS.txt and know exactly what you are blocking and why you are blocking it.
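The last point in that list – blocking CSS and .js files – is easy to test for. Below is a minimal sketch (the robots.txt location and resource URLs are hypothetical placeholders) that uses Python’s built-in robots.txt parser to check whether Googlebot is allowed to fetch the resources your pages rely on:

```python
# Minimal sketch: check whether Googlebot can fetch key CSS/JS resources
# according to your robots.txt. URLs below are hypothetical placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # downloads and parses the robots.txt file

resources = [
    "https://www.example.com/assets/main.css",
    "https://www.example.com/assets/app.js",
]

for resource in resources:
    if rp.can_fetch("Googlebot", resource):
        print(f"OK      - Googlebot can fetch {resource}")
    else:
        print(f"BLOCKED - {resource} is disallowed; Google may not render your pages fully")
```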

Ratings Can Be Relative

It’s important to note your website quality is often judged on the quality of competing pages for this keyword phrase.

SEO is still a horserace.

A lot of this is all RELATIVE to what YOUR COMPETITION are doing.

How relative?

Big sites v small sites?

Sites with a lot of links v not a lot of links?

Big companies with a lot of staff v small companies with a few staff?

Do sites at the top of Google get asked more of? Algorithmically and manually? Just…. because they are at the top?

Whether it’s algorithmic or manual – based on technical, architectural, reputation or content factors – Google can decide and will decide if your site meets its quality requirements to rank on page one.

The likelihood of you ranking stable at number one is almost non-existent in any competitive niche where you have more than a few players aiming to rank number one.

Not en-masse, not unless you are bending the rules.

My own strategy for visibility over the last few years has been to avoid focusing entirely on ranking for particular keywords and rather improve the search experience of my entire website.

The entire budget of my time went on content improvement, content reorganisation, website architecture improvement, and lately, mobile experience improvement.

I have technical improvements to speed, usability and accessibility in the pipeline.

In simple terms, I took thin content and made it fat to make old content perform better.

Unsurprisingly, ranking fat content comes with its own challenges as the years go by.

Can Thin Content Still Rank In Google?


Ranking top depends on the query and level of competition for the query.

Google’s high-quality recommendations are often for specific niches and specific searches as most of the web would not meet the very highest requirements.

Generally speaking – real quality will stand out, in any niche with a lack of it, at the moment.

The time it takes for this to happen (at Google’s end) leaves a lot to be desired in some niches and time is something Google has an almost infinite supply of compared to 99% of the businesses on the planet.

What Are The High-Quality Characteristics of a Web Page?

QUOTE: “High quality pages are satisfying and achieve their purpose well.” Google Search Quality Evaluator Guidelines, 2017

The following are examples of what Google calls ‘high-quality characteristics’ of a page and should be remembered:

  • A “satisfying or comprehensive amount of very high-quality” main content (MC)
  • Copyright notifications up to date
  • Functional page design
  • Page author has Topical Authority
  • High-Quality Main Content
  • Positive Reputation or expertise of website or author (Google yourself)
  • Very helpful SUPPLEMENTARY content “which improves the user experience”
  • Trustworthy
  • Google wants to reward ‘expertise’ and ‘everyday expertise’ or experience so you need to make this clear (perhaps using an Author Box or some other widget)
  • Accurate information
  • Ads can be at the top of your page as long as they do not distract from the main content on the page
  • Highly satisfying website contact information
  • Customised and very helpful 404 error pages
  • Awards
  • Evidence of expertise
  • Attention to detail

If Google can detect investment in time and labour on your site – there are indications that they will reward you for this (or at least – you won’t be affected when others are, meaning you rise in Google SERPs when others fall).

What Characteristics Do The Highest Quality Pages Exhibit?

QUOTE: “The quality of the MC is one of the most important criteria in Page Quality rating, and informs the E-A-T of the page. For all types of webpages, creating high quality MC takes a significant amount of at least one of the following: time, effort, expertise, and talent/skill.” Google Quality Evaluator Guidelines, 2017

You obviously want the highest quality ‘score’ possible but looking at the Search Quality Evaluator Guidelines that is a lot of work to achieve.

Google wants to rate you on the effort you put into your website, and how satisfying a visit is to your pages.

  1. Very high or highest quality MC, with demonstrated expertise, talent, and/or skill.
  2. “Very high level of expertise, authoritativeness, and trustworthiness (page and website) on the topic of the page.”
  3. “Very good reputation (website or author) on the topic of the page.”

At least in competitive niches where Google intends to police this quality recommendation, Google wants to reward high-quality pages and “the Highest rating may be justified for pages with a satisfying or comprehensive amount of very high-quality” main content.

If your main content is very poor, with “grammar, spelling, capitalization, and punctuation errors“, or not helpful or trustworthy – ANYTHING that can be interpreted as a bad user experience – you can expect to get a low rating.

QUOTE: “We will consider content to be Low quality if it is created without adequate time, effort, expertise, or talent/skill. Pages with low-quality (main content) do not achieve their purpose well.” Google Search Quality Evaluator Guidelines, 2017

Note – not ALL “thin-content” pages are automatically classed low-quality.

If you can satisfy the user with a page “thin” on content – you are OK (but probably susceptible to someone building a better page than yours more easily, I’d say).

Google expects more from big brands than they do from a smaller store (but that does not mean you shouldn’t be aiming to meet ALL these high-quality guidelines above).

Note, too, that if you violate Google Webmaster recommendations for performance in their indexes of the web – you automatically get a low-quality rating.

If your page has a sloppy design, low-quality main content and too many distracting ads your rankings are very probably going to take a nose-dive.

If a Search Quality Evaluator is subject to a sneaky redirect they are instructed to rate your site low.

What Are The Low-Quality Signals Google Looks For?

QUOTE: “Low quality pages are unsatisfying or lacking in some element that prevents them from achieving their purpose well. These pages lack expertise or are not very trustworthy/authoritative for the purpose of the page.” Google Quality Evaluator Guidelines, 2017

These include but are not limited to:

  1. Lots of spammy comments
  2. Low-quality content that lacks E-A-T signals (Expertise + Authority + Trust)
  3. NO Added Value for users
  4. Poor page design
  5. Malicious harmful or deceptive practices detected
  6. Negative reputation
  7. Auto-generated content
  8. No website contact information
  9. Fakery or INACCURATE information
  10. Untrustworthy
  11. Website not maintained
  12. Pages just created to link to others
  13. Pages lack purpose
  14. Keyword stuffing
  15. Inadequate customer service pages
  16. Sites that use practices Google doesn’t want you to use

Pages can get a neutral rating too.

Pages that have “Nothing wrong, but nothing special” about them don’t “display characteristics associated with a High rating”, which puts you in the middle ground – probably not a sensible place to be a year or so down the line.

Pages Can Be Rated ‘Medium Quality’

QUOTE: “Medium pages achieve their purpose and have neither high nor low expertise, authoritativeness, and trustworthiness. However, Medium pages lack the characteristics that would support a higher quality rating. Occasionally, you will find a page with a mix of high and low quality characteristics. In those cases, the best page quality rating may be Medium.” Google Quality Evaluator Guidelines, 2017

Quality raters will give content a medium rating when the author or entity responsible for it is unknown.

If you have multiple editors contributing to your site, you had better have a HIGH EDITORIAL STANDARD.

One could take from all this that Google Quality raters are out to get you if you manage to get past the algorithms, but equally, Google quality raters could be friends you just haven’t met yet.

Somebody must be getting rated highly, right?

Impress a Google Quality rater and get a high rating.

If you are a spammer you’ll be pulling out the stops to fake this, naturally, but this is a chance for real businesses to put their best foot forward and HELP quality raters correctly judge the size and relative quality of your business and website.

Real reputation is hard to fake – so if you have it – make sure it’s on your website and is EASY to access from contact and about pages.

The quality raters handbook is a good training guide for looking for links to disavow, too.

It’s pretty clear.

Google organic listings are reserved for ‘remarkable’ and ‘reputable’ content, expertise and trusted businesses.

A high bar to meet – and one that is designed for you to never quite meet unless you are serious about competing, as there is so much work involved.

I think the inferred message is to call your Adwords rep if you are an unremarkable business.

Google Quality Algorithm Updates

QUOTE: “We have 3 updates a day in average. I think it’s pretty safe to assume there was one recently…” Gary Illyes, Google 2017

Google has many algorithm updates during a year.

These ‘quality updates’ are very reminiscent of Google Panda updates and often impact many websites at the same time – and often these focus on demoting similar ‘low-quality’ SEO techniques we have been told Panda focuses on.

Usually, Google has 3 or 4 big updates in a year that focus on various things, but they also make changes daily. See this list for a comprehensive list of Google algorithm updates.

What Is Google Panda?

QUOTE: “Panda is an algorithm that’s applied to sites overall and has become one of our core ranking signals. It measures the quality of a site, which you can read more about in our guidelines. Panda allows Google to take quality into account and adjust ranking accordingly.” Google

Google Panda aims to rate the quality of your pages and website and is based on things about your site that Google can rate, or algorithmically identify.

QUOTE: “(Google Panda) measures the quality of a site pretty much by looking at the vast majority of the pages at least. But essentially allows us to take quality of the whole site into account when ranking pages from that particular site and adjust the ranking accordingly for the pages. So essentially, if you want a blunt answer, it will not devalue, it will actually demote. Basically, we figured that site is trying to game our systems, and unfortunately, successfully. So we will adjust the rank. We will push the site back just to make sure that it’s not working anymore.”  Gary Illyes – Search Engine Land

We are told the current Panda is an attempt to basically stop low-quality thin content pages ranking for keywords they shouldn’t rank for.

Panda evolves – signals can come and go – Google can get better at determining quality as a spokesman from Google has confirmed :

QUOTE: So it’s not something where we’d say, if your website was previously affected, then it will always be affected. Or if it wasn’t previously affected, it will never be affected.… sometimes we do change the criteria…. category pages…. (I) wouldn’t see that as something where Panda would say, this looks bad.… Ask them the questions from the Panda blog post….. usability, you need to work on.“ John Mueller, Google.

In my notes about Google Penguin, I list the original, somewhat abstract and metaphorical Panda ranking ‘factors’ published as a guideline for creating high-quality pages. I also list these Panda points below:

(PS – I have emphasised two of the bullet points below, at the top and bottom, because I think it’s easier to understand these points as a question, how to work that question out, and ultimately, what Google really cares about – what their users think.)

  • Would you trust the information presented in this article? (YES or NO)
  • Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature? EXPERTISE
  • Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations? LOW-QUALITY CONTENT/THIN CONTENT
  • Would you be comfortable giving your credit card information to this site? (HTTPS? OTHER TRUST SIGNALS (CONTACT / ABOUT / PRIVACY / COPYRIGHT / DISCLOSURES / DISCLAIMERS etc.) – especially relevant if your site is a YMYL page.)
  • Does this article have spelling, stylistic, or factual errors? (SPELLING + GRAMMAR + CONTENT QUALITY – perhaps wrong dates in content, on old articles, for instance)
  • Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines? (OLD SEO TACTICS|DOORWAY PAGES)
  • Does the article provide original content or information, original reporting, original research, or original analysis? (UNIQUE CONTENT, ORIGINAL RESEARCH & SATISFYING CONTENT)
  • Does the page provide substantial value when compared to other pages in search results? (WHAT’S THE RELATIVE QUALITY OF COMPETITION LIKE FOR THIS TERM?)
  • How much is quality control done on content? (WHEN WAS THIS LAST EDITED? Is CONTENT OUTDATED? IS SUPPLEMENTARY CONTENT OUTDATED (External links and images?))
  • Does the article describe both sides of a story? (IS THIS A PRESS RELEASE?)
  • Is the site a recognized authority on its topic? (EXPERTISE, AUTHORITY, TRUST)
  • Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care? (IS THIS CONTENT BOUGHT FROM A $5 per article content factory? Or is written by an EXPERT or someone with a lot of EXPERIENCE of the subject matter?)
  • Was the article edited well, or does it appear sloppy or hastily produced? (QUALITY CONTROL on EDITORIALS)
  • For a health related query, would you trust information from this site? (EXPERTISE NEEDED)
  • Would you recognize this site as an authoritative source when mentioned by name? (EXPERTISE NEEDED)
  • Does this article provide a complete or comprehensive description of the topic? (Is the page text designed to help a visitor or shake them down for their cash?)
  • Does this article contain insightful analysis or interesting information that is beyond obvious? (LOW QUALITY CONTENT – You know it when you see it)
  • Is this the sort of page you’d want to bookmark, share with a friend, or recommend? (Would sharing this page make you look smart or dumb to your friends? This should be reflected in social signals)
  • Does this article have an excessive amount of ads that distract from or interfere with the main content? (OPTIMISE FOR SATISFACTION FIRST – CONVERSION SECOND – do not let the conversion get in the way of satisfying the INTENT of the page. For example – if you rank with INFORMATIONAL CONTENT with a purpose to SERVE those visitors – the visitor should land on your destination page and not be deviated from the PURPOSE of the page – and that was informational, in this example – to educate. SO – educate first – beg for social shares on those articles – and leave the conversion on Merit and slightly more subtle influences rather than massive banners or whatever that annoy users). We KNOW ads (OR DISTRACTING CALL TO ACTIONS) convert well at the top of articles – but Google says it is sometimes a bad user experience. You run the risk of Google screwing with your rankings as you optimise for conversion so be careful and keep everything simple and obvious.
  • Would you expect to see this article in a printed magazine, encyclopedia or book? (Is this a HIGH-QUALITY article?)… no? then expect ranking problems.
  • Are the articles short, unsubstantial, or otherwise lacking in helpful specifics? (Is this a LOW or MEDIUM QUALITY ARTICLE? LOW WORD COUNTS ACROSS PAGES?)
  • Are the pages produced with great care and attention to detail vs. less attention to detail? (Does this page impress?)
  • Would users complain when they see pages from this site? (WILL THIS PAGE MAKE GOOGLE LOOK STUPID IF IT RANKS TOP?)

All that sits quite nicely with the information you can read in the Google Search Quality Evaluator Guidelines.

If you fail to meet these standards (even some of them) your rankings can fluctuate wildly (and often – we are told Google updates Panda every month, and updates can often be spotted rolling in).

It all probably correlates quite nicely too, with the type of sites you don’t want links from.

QUOTE: “I think there is probably a misunderstanding that there’s this one site-wide number that Google keeps for all websites and that’s not the case.  We look at lots of different factors and there’s not just this one site-wide quality score that we look at. So we try to look at a variety of different signals that come together, some of them are per page, some of them are more per site, but it’s not the case where there’s one number and it comes from these five pages on your website.” John Mueller, Google

Google is raising the quality bar, and forcing optimisers and content creators to spend HOURS, DAYS or WEEKS longer on websites if they ‘expect’ to rank HIGH in natural results.

If someone is putting the hours in to rank their site through legitimate efforts – Google will want to reward that – because it keeps the barrier to entry HIGH for most other competitors.

Critics will say the higher the barrier to entry is to rank high in Google natural listings the more attractive Google Adwords begins to look to those other businesses.

Google says a quality rater does not affect your site, but if your site gets multiple LOW-QUALITY notices from manual reviewers – that stuff is coming back to get you later, surely.

Identifying Which Pages On Your Own Site Hurt Or Help Your Rankings


Separating the wheat from the chaff.

Being ‘indexed’ is important. If a page isn’t indexed, the page can’t be returned by Google in Search Engine Results Pages.

While getting as many pages indexed in Google was historically a priority for an SEO, Google is now rating the quality of pages on your site and the type of pages it is indexing.

So bulk indexation is no guarantee of success – in fact, it’s a risk in 2018 to index all URLs on your site, especially if you have a large, sprawling site.

If you have a lot of low-quality pages (URLs) indexed on your site compared to high-quality pages (URLs)… Google has told us it is marking certain sites down for that.

Some URLs are just not welcome to be indexed as part of your website content anymore.

Do I need to know which pages are indexed?

No. Knowing is useful, of course, but largely unnecessary. Indexation is never a guarantee of traffic.

Some SEOs tend to scrape Google to get indexation data on a website. I’ve never bothered with that. Most sites I work with have XML sitemap files, so an obvious place to start looking at such issues is Google Search Console.

Google will tell you how many pages you have submitted in a sitemap, and how many pages are indexed. It will not tell you which pages are indexed, but if there is a LARGE discrepancy between SUBMITTED and INDEXED, it’s very much worth digging deeper.
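To put rough numbers on that SUBMITTED vs INDEXED comparison, a small script can count the URLs in your XML sitemap so you can set the figure against the indexed count you read out of Search Console. A minimal sketch, assuming a hypothetical sitemap location and an indexed figure copied in by hand:

```python
# Minimal sketch: count URLs submitted in an XML sitemap and compare against
# the indexed figure reported by Search Console (entered manually below).
# The sitemap URL and indexed count are hypothetical placeholders.
import xml.etree.ElementTree as ET
import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"
INDEXED_COUNT = 850  # copied by hand from the Search Console sitemap report

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
xml_text = requests.get(SITEMAP_URL, timeout=10).text
submitted = [loc.text for loc in ET.fromstring(xml_text).findall(".//sm:loc", ns)]

print(f"Submitted in sitemap: {len(submitted)}")
print(f"Indexed (per Search Console): {INDEXED_COUNT}")
if INDEXED_COUNT < len(submitted) * 0.8:  # arbitrary threshold for a 'large discrepancy'
    print("Large discrepancy between submitted and indexed - worth digging deeper.")
```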


If Google is de-indexing large swaths of your content that you have actually submitted as part of an XML sitemap, then a problem is often afoot.

Unfortunately, with this method, you don’t get to see the pages produced by the CMS that are not in the XML sitemap – so this is not a full picture of the ‘health’ of your website.

Read my article on how to get your entire website crawled and indexed by Google.

Identifying Dead Pages

I usually start with a performance analysis that involves merging data from a physical crawl of a website with analytics data and Webmaster tools data. A content type analysis will identify the type of pages the cms generates. A content performance analysis will gauge how well each section of the site performs.

If you have 100,000 pages on a site, and only 1,000 pages get organic traffic from Google over a 3-6 month period – you can make the argument 99% of the site is rated as ‘crap’ (at least as far as Google rates pages these days).

I group pages like these together as ‘dead pages‘ for further analysis. Deadweight, ‘dead’ for short.

The thinking is if the pages were high-quality, they would be getting some kind of organic traffic.

Identifying which pages receive no organic visitors over a sensible timeframe is a quick, if noisy, way to separate pages that obviously WORK from pages that DON’T – and it will help you clean up a large portion of redundant URLs on the site.

It helps to see page performance in the context of longer timeframes as some types of content can be seasonal, for instance, and produce false positives over a shorter timescale. It is important to trim content pages carefully – and there are nuances.
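A minimal sketch of that merge, assuming two hypothetical CSV exports – one from a site crawler (crawl.csv with a url column) and one from your analytics package (organic.csv with url and sessions columns, filtered to organic landing pages over a sensible timeframe):

```python
# Minimal sketch: merge a crawl export with organic landing page data to flag
# 'dead' pages. File names and column names are hypothetical placeholders.
import pandas as pd

crawl = pd.read_csv("crawl.csv")      # every indexable URL found by the crawler
organic = pd.read_csv("organic.csv")  # organic sessions per landing page URL

merged = crawl.merge(organic, on="url", how="left")
merged["sessions"] = merged["sessions"].fillna(0)  # no analytics row = zero visits

dead = merged[merged["sessions"] == 0]
print(f"{len(dead)} of {len(merged)} crawled URLs received no organic traffic")

# Export for manual review - remember the false-positive caveat below
dead.to_csv("dead-pages.csv", index=False)
```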

False Positives

Experience can educate you when a page is high-quality and yet receives no traffic. If the page is thin, but is not manipulative, is indeed ‘unique’ and delivers on a purpose with little obvious detectable reason to mark it down, then you can say it is a high-quality page – just with very little search demand for it. Ignored content is not the same as ‘toxic’ content.

False positives aside, once you identify the pages receiving no traffic, you very largely isolate the type of pages on your site that Google doesn’t rate – for whatever reason. A strategy for these pages can then be developed.

Identifying Content That Can Potentially Hurt Your Rankings

As you review the pages, you’re probably going to find pages that include:

  • out of date, overlapping or irrelevant content
  • collections of pages not paginated properly
  • indexable pages that shouldn’t be indexed
  • stub pages
  • indexed search pages
  • pages with malformed HTML and broken images
  • auto-generated pages with little value
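To get a rough first count of how many of these URL types your CMS is producing, a simple pattern scan over a crawl export can help. This is only a minimal sketch – the file name and the regex patterns are hypothetical examples, and every CMS produces its own flavour of junk URLs:

```python
# Minimal sketch: bucket crawled URLs into common 'problem page' patterns.
# urls.txt (one URL per line) and the regexes are hypothetical examples.
import re

patterns = {
    "internal search pages": re.compile(r"[?&](s|q|search)="),
    "paginated archives": re.compile(r"/page/\d+"),
    "tracking parameters": re.compile(r"[?&]utm_"),
}

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for label, pattern in patterns.items():
    matches = [u for u in urls if pattern.search(u)]
    print(f"{label}: {len(matches)} URLs")
```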

You will probably find ‘dead’ pages you didn’t even know your CMS produced (hence why an actual crawl of your site is required, rather than just working from a list of URLs from an XML sitemap, for instance).

Those pages need to be cleaned up, Google has said. And remaining pages should:

QUOTE: “stand on their own” John Mueller, Google

Google doesn’t approve of most types of auto-generated pages in 2018, so you don’t want Google indexing these pages in a normal fashion.

Judicious use of the ‘noindex,follow‘ directive in robots meta tags, and sensible use of the canonical link element, are required implementations on most sites I see these days.
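
If you want to audit this at scale, a rough sketch along these lines (using the third-party requests and BeautifulSoup libraries) will report the robots meta directive and canonical element on each URL you feed it – the example URL is a placeholder:

```python
# Rough sketch: report the robots meta directive and canonical URL for a list
# of pages. 'urls' is an assumed example list - feed it your own URLs.
import requests
from bs4 import BeautifulSoup

urls = ["https://www.example.com/some-thin-page/"]

for url in urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    robots = soup.find("meta", attrs={"name": "robots"})
    canonical = soup.find("link", rel="canonical")

    print(url)
    print("  robots meta:", robots["content"] if robots else "none (defaults to index,follow)")
    print("  canonical:  ", canonical["href"] if canonical else "none")
```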

The pages that remain after a URL clear-out can be reworked and improved.

In fact – they MUST BE improved if you are to win more rankings and get more Google organic traffic in future.

This is time-consuming – just like Google wants it to be. You need to review DEAD pages with a forensic eye and ask:

  • Are these pages high-quality and very relevant to a search term?
  • Do these pages duplicate content found on other pages of the site?
  • Are these pages automatically generated, with little or no unique text content on them?
  • Is the purpose of this page met WITHOUT sending visitors to another page e.g. doorway pages?
  • Will these pages ever pick up natural links?
  • Is the intent of these pages to inform first? (Or to profit from organic traffic through advertising?)
  • Are these pages FAR superior to the competition in Google presently for the search term you want to rank? This is actually very important.

If the answer to any of the above is NO – then it is imperative you take action to minimise the amount of these types of pages on your site.

What about DEAD pages with incoming backlinks or a lot of text content?

Bingo! Use 301 redirects (or canonical link elements) to point any asset that has some value to Googlebot at the equivalent, up-to-date section of your site. Do NOT just redirect these pages to your homepage.
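
If you end up with a long list of these, something like the sketch below can turn a simple old-URL-to-new-URL spreadsheet into redirect rules (Apache ‘Redirect 301’ syntax shown here; the CSV name and column names are assumptions):

```python
# Simple sketch: turn a redirect map (CSV with 'old_path' and 'new_path'
# columns - assumed names) into Apache 'Redirect 301' rules.
import csv

with open("redirect_map.csv", newline="") as f:
    rows = list(csv.DictReader(f))

with open("redirects.conf", "w") as out:
    for row in rows:
        # Each dead-but-linked-to page points at its closest equivalent,
        # never blindly at the homepage.
        out.write(f"Redirect 301 {row['old_path']} {row['new_path']}\n")

print(f"Wrote {len(rows)} redirect rules to redirects.conf")
```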

Rework available content before you bin it


High-quality content is expensive – so rework content when it is available. Medium quality content can always be made higher quality – in fact – a page is hardly ever finished in 2018. EXPECT to come back to your articles every six months to improve them to keep them moving in the right direction.

Sensible grouping of content types across the site can often leave you with substantial text content that can be reused and repackaged. Content originally spread thinly over multiple pages, once consolidated into one page reworked and shaped around a topic, often has a considerably more successful time of it in Google SERPs in 2018.

Well, it does if the page you make is useful and has a purpose other than just to make money.

REMEMBER – DEAD PAGES are only one aspect of a site review. There’s going to be a large percentage of any site that gets a little organic traffic but still severely underperforms, too – tomorrow’s DEAD pages. I call these POOR pages in my reviews.

Specific Advice From Google on Pruning Content From Your Site

If you have a very low-quality site from a content point of view, just deleting the content (or noindexing it) is probably not going to have a massive positive impact on your rankings.

Ultimately the recommendation in 2018 is to focus on “improving content” as “you have the potential to go further down if you remove that content”.

QUOTE: “Ultimately, you just want to have a really great site people love. I know it sounds like a cliché, but almost [all of] what we are looking for is surely what users are looking for. A site with content that users love – let’s say they interact with content in some way – that will help you in ranking in general, not with Panda. Pruning is not a good idea because with Panda, I don’t think it will ever help mainly because you are very likely to get Panda penalized – Pandalized – because of low-quality content…content that’s actually ranking shouldn’t perhaps rank that well. Let’s say you figure out if you put 10,000 times the word “pony” on your page, you rank better for all queries. What Panda does is disregard the advantage you figure out, so you fall back where you started. I don’t think you are removing content from the site with potential to rank – you have the potential to go further down if you remove that content. I would spend resources on improving content, or, if you don’t have the means to save that content, just leave it there. Ultimately people want good sites. They don’t want empty pages and crappy content. Ultimately that’s your goal – it’s created for your users.” Gary Illyes, Google 2017

Specific Advice From Google On Low-Quality Content On Your Site

And remember the following, specific advice from Google on removing low-quality content from a domain:

QUOTE: “One other specific piece of guidance we’ve offered is that low-quality content on some parts of a website can impact the whole site’s rankings, and thus removing low-quality pages, merging or improving the content of individual shallow pages into more useful pages, or moving low-quality pages to a different domain could eventually help the rankings of your higher-quality content.” Google

Minimise Low-Quality Content & Overlapping Text Content

Google may well be able to recognise ‘low-quality’ a lot better than it does ‘high-quality’ – so having a lot of ‘low-quality’ pages on your site is potentially what you are actually going to be rated on (if it makes up most of your site) – now, or in the future. NOT your high-quality content.

This is more or less explained by Google spokespeople like John Mueller. He is constantly talking about ‘folding’ thin pages together these days (and I can say that certainly has a positive impact on many sites).

While his advice in this instance might be specifically about UGC (user-generated content like forums) – I am more interested in what he has to say when he talks about the algorithm looking at the site “overall” and how it ‘thinks’ when it finds a mixture of high-quality pages and low-quality pages.

And Google has clearly said in print:

QUOTE: “low-quality content on part of a site can impact a site’s ranking as a whole.” Google

Avoid Google’s punitive algorithms

Fortunately, we don’t actually need to know and fully understand the ins-and-outs of Google’s algorithms to know what the best course of action is.

The sensible thing in light of Google’s punitive algorithms is just to not let Google index (or more accurately, rate) low-quality pages on your site. And certainly – stop publishing new ‘thin’ pages. Don’t put your site at risk.

If pages get no organic traffic anyway, are out-of-date for instance, and improving them would take a lot of effort and expense, why let Google index them normally, if by rating them it impacts your overall score? Clearing away the low-quality stuff lets you focus on building better stuff on other pages that Google will rank in 2018 and beyond.

Ideally, you would have a giant site and every page would be high-quality – but that’s not practical.

A myth is that pages need a lot of text to rank. They don’t, but a lot of people still try to make text bulkier and unique page-to-page.

While that theory is sound (when focused on a single page, where the intent is to deliver utility content to a Google user), using old-school SEO techniques across many pages of an especially large site seems to amplify site quality problems after recent algorithm changes, so this type of optimisation without keeping an eye on overall site quality is self-defeating in the long run.

If you want assistance with this type of analysis for your own website, email me here or you can buy a review of your website online here.

Investigating A Traffic Drop

Huge drop in traffic from Google in May 2015

Every site is impacted by how highly Google rates it.

There are many reasons a website loses traffic from Google. Server changes, website problems, content changes, downtimes, redesigns, migrations… the list is extensive.

Sometimes, Google turns up the dial on demands on ‘quality’, and if your site falls short, a website traffic crunch is assured. Some sites invite problems ignoring Google’s ‘rules’ and some sites inadvertently introduce technical problems to their site after the date of a major algorithm update and are then impacted negatively by later refreshes of the algorithm.

Comparing your Google Analytics data side by side with the dates of official algorithm updates is useful in diagnosing a site health issue or traffic drop. In the above example, a new client thought it was a switch to HTTPS and server downtime that caused the drop when it was actually the May 6, 2015, Google Quality Algorithm (originally called Phantom 2 in some circles) that caused the sudden drop in organic traffic – and the problem was probably compounded by unnatural linking practices. (This client did eventually receive a penalty for unnatural links when they ignored our advice to clean up).
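
A minimal sketch of that side-by-side comparison, assuming you have exported daily organic sessions to CSV (the file layout is an assumption; the May 6, 2015 date is the update discussed above):

```python
# Minimal sketch: plot daily organic sessions with known Google update dates
# marked. The CSV layout and any extra update dates are assumed examples.
import pandas as pd
import matplotlib.pyplot as plt

traffic = pd.read_csv("organic_sessions.csv", parse_dates=["date"])  # columns: date, sessions

update_dates = {
    "2015-05-06": "Google Quality Update ('Phantom 2')",  # the update discussed above
}

plt.figure(figsize=(10, 4))
plt.plot(traffic["date"], traffic["sessions"])
for date, label in update_dates.items():
    plt.axvline(pd.Timestamp(date), color="red", linestyle="--", label=label)

plt.ylabel("Organic sessions")
plt.legend()
plt.tight_layout()
plt.show()
```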

Thin Content

A quick check of how the site was laid out soon uncovered a lot of unnecessary pages, or what Google calls thin, overlapping content. This observation would go a long way to confirming that the traffic drop was indeed caused by the May algorithm change.

Another obvious way to gauge the health of a site is to see which pages on the site get zero traffic from Google over a certain period of time. I do this by merging analytics data with crawl data – as analytics doesn’t give you data on pages it sends no traffic to.

Often, this process can highlight low-quality pages on a site.


Google calls a lot of pages ‘thin’ or ‘overlapping’ content these days. I go into some of that in my duplicate content penalty post.

Algorithm Changes

Algorithm changes in 2018 seem to centre on reducing the effectiveness of old-school SEO techniques, with the May 2015 Google ‘Quality’ algorithm update a bruisingly familiar example. An algorithm change is usually akin to ‘community service’ for the business impacted negatively.

If your pages were designed to get the most out of Google, with commonly known and now outdated SEO techniques, chances are Google has identified this and is throttling your rankings in some way. Google will continue to throttle your rankings until you clean your pages up.

If Google thinks your links are manipulative, they want them cleaned up, too.

Actually – looking at the backlink profile of this customer, they are going to need a disavow file prepared too.


That is unsurprising in today’s SEO climate.

What could be argued was ‘highly relevant’ or ‘optimised’ on-site SEO for Google just a few years ago is now being treated more like ‘web spam’ by punitive algorithms, rather than just ‘over-optimisation’.

Google went through the SEO playbook, identified old techniques and uses them against you today – meaning every SEO job you take on now has a clean-up aspect.

Google has left a very narrow band of opportunity when it comes to SEO – and punishments are designed to take you out of the game for some time while you clean up the infractions.

Technical Issues


Google has a LONG list of technical requirements it advises you meet, on top of all the things it tells you NOT to do to optimise your website. Meeting Google’s technical guidelines is no magic bullet to success – but failing to meet them can impact your rankings in the long run – and the odd technical issue can actually severely impact your entire site if rolled out across multiple pages.

Adhering to technical guidelines often delivers a second-order benefit.

You won’t get penalised or filtered when others do. When others fall, you will rise.

Mostly – individual technical issues will not be the reason you have ranking problems, but they still need to be addressed for any second-order benefit they provide.

Google spokespeople say ‘user experience’ is NOT A RANKING FACTOR, but this might be splitting hairs, as lots of the rules are designed to guarantee as good a ‘user experience’ as possible for Google’s users.

Most of Google’s technical guidelines can be interpreted in this way. And most need to be followed, whether addressing these issues has an immediate positive impact on the site or not.

Whether or not your site has been impacted in a noticeable way by these algorithms, every SEO project must start with a historical analysis of site performance. Every site has things to clean up and to optimise in a modern way.

The sooner you understand why Google is sending you less traffic than it did last year, the sooner you can clean it up and focus on proactive SEO that starts to impact your rankings in a positive way.

Technical SEO

If you are doing a professional SEO audit for a real business, you are going to have to think like a Google Search Quality Rater AND a Google search engineer to provide real long-term value to a client.

When making a site for Google in 2018, you really need to understand that Google has a long list of things it will mark sites down for, and that’s usually old-school SEO tactics which are now classed as ‘webspam‘.

Conversely, sites that are not marked “low-quality” are not demoted and so will improve in rankings. Sites with higher rankings often pick up more organic links, and this process can float high-quality pages on your site quickly to the top of Google.

So the sensible thing for any webmaster is to NOT give Google ANY reason to DEMOTE a site. Tick all the boxes Google tells you to tick, so to speak.

I have used this simple (but longer-term) strategy to rank on page 1 or thereabouts for ‘SEO’ in the UK over the last few years, and to drive 100 thousand relevant organic visitors to this site every month, to only about 100 pages, without building any links over the last few years (and very much working on it part-time).


Example ‘High Quality’ E-commerce Site

Google publishes its search quality rating guidelines. After numerous ‘leaks’, this previously ‘secretive’ document has now been made available for anyone to download.

This document gives you an idea of the type of quality websites Google wants to display in its search engine results pages.

I use these quality rating documents and the Google Webmaster Guidelines as the foundation of my audits for e-commerce sites.

What are these quality raters doing?

Quality Raters are rating Google’s ‘experiments’ and manually reviewing web pages that are presented to them in Google’s search engine results pages (SERPs). We are told that these ratings don’t impact your site, directly.

QUOTE: “Ratings from evaluators do not determine individual site rankings, but are used help us understand our experiments. The evaluators base their ratings on guidelines we give them; the guidelines reflect what Google thinks search users want.” GOOGLE.

What Does Google class as a high-quality product page on an e-commerce site?


This page and site appear to check all the boxes Google wants to see in a high-quality e-commerce website these days.

This product page is an example of a YMYL page exhibiting “A satisfying or comprehensive amount of very high-quality MC (main content)” and “Very high level of expertise, highly authoritative/highly trustworthy for the purpose of the page” with a “Very positive reputation“.

Highest Quality Rating: Shopping (YMYL Page Example)

What Are YMYL Pages?

Google classifies web pages that “potentially impact the future happiness, health, or wealth of users” as “Your Money or Your Life” pages (YMYL) and holds these types of pages to higher standards than, for instance, hobby and informational sites.

Essentially, if you are selling something to visitors or advising on important matters like finance, law or medical advice – your page will be held to this higher standard.

This is a classification of certain pages by Google:

QUOTE: “Your Money or Your Life (YMYL) Pages.

where Google explains:

QUOTE: “Some types of pages could potentially impact the future happiness, health, or wealth of users. We call such pages “Your Money or Your Life” pages, or YMYL

and in this instance, it refers to a very common type of page:

QUOTE: “Shopping or financial transaction pages: webpages which allow users to make purchases, transfer money, pay bills, etc. online (such as online stores and online banking pages)…..We have very high Page Quality rating standards for YMYL pages because low-quality YMYL pages could potentially negatively impact users’ happiness, health, or wealth.”

It is interesting to note that 1. This example of a ‘high-quality’ website in the guidelines is from 2013 and 2. The website looks different today.

But – this is a clear example of the kind of ‘user experience’ you are trying to mimic if you have an online e-commerce store and want more Google organic traffic to product pages.

You might not be able to mimic the positive reputation this US site has, but you are going to have to build your product pages to compete with it, and others like it.

What Is “Domain Authority“?

QUOTE: “So domain authority is kind of a theme picked up by SEO companies or SEO tools. So it’s not really something that we have here at Google.” John Mueller, Google 2017

Domain authority, whether or not it is something Google actually uses, is an important concept to take note of. Essentially Google ‘trusts’ some websites more than others, and you will find that it is easier to rank using some websites than it is others.

SEOs conveniently call this effect ‘domain authority’, and it seemed to be related to ‘PageRank’ – the system Google started ranking the web with in 1998.

QUOTE: “PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page’s value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B.” Google

Domain authority is an important ranking phenomenon in Google. Nobody outside of Google knows exactly how it calculates, ranks and rates the popularity, reputation, intent or trust of a website, but when I write about domain authority I am generally thinking of sites that are popular, reputable and trusted – all of which can be faked, of course.

It’s a useful metaphor and proxy for quality and sometimes you can use it to work out the likelihood of a site ranking for a particular keyword based on its relative score when compared to competing sites and pages.

Historically, sites that had domain authority or online business authority had lots of links to them, hence why link building was so popular a tactic – and counting these links is generally how most third-party tools still calculate a pseudo domain authority score for websites today.

Massive domain authority and ranking ‘trust’ was awarded to very successful sites that had gained a lot of links from credible sources, and other online business authorities too.

Google calls it ‘online business authority’:

QUOTE: “Amazon has a lot of “online business authority””…. (Official Google Webmaster Blog)

SEOs more usually talk about domain trust and domain authority based on the number, type and quality of incoming links to a site.

Examples of trusted, authority domains include Wikipedia, the W3C and Apple. e.g.: these are very successful brands.

How did you take advantage of being an online business authority? You turned the site into an SEO black hole to hoard the benefits of domain authority, and published lots of content, sometimes with little thought to quality. On any subject. Because Google would rank it!

I think this ‘quality score’ Google has developed since 2010 could be Google’s answer to this sort of historical domain authority abuse.

QUOTE: “And we actually came up with a classifier to say, okay, IRS or Wikipedia or New York Times is over on this side, and the low-quality sites are over on this side. And you can really see mathematical reasons…” Matt Cutts, Google 2011


QUOTE: “The score is determined from quantities indicating user actions of seeking out and preferring particular sites and the resources found in particular sites. *****A site quality score for a particular site**** can be determined by computing a ratio of a numerator that represents user interest in the site as reflected in user queries directed to the site and a denominator that represents user interest in the resources found in the site as responses to queries of all kinds The site quality score for a site can be used as a signal to rank resources, or to rank search results that identify resources, that are found in one site relative to resources found in another site.” Navneet Panda, Google Patent
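
Purely as a toy reading of the ratio described in that patent – and emphatically not how Google actually computes anything – the arithmetic looks something like this:

```python
# Toy illustration only: the ratio described in the quoted patent, NOT how
# Google actually calculates anything. All numbers are made up.
queries_seeking_the_site = 12_000      # user interest in the site itself (queries directed to the site)
interest_in_site_resources = 480_000   # user interest in the site's pages across queries of all kinds

toy_site_quality_score = queries_seeking_the_site / interest_in_site_resources
print(f"Toy 'site quality score': {toy_site_quality_score:.4f}")
```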

An effect akin to ‘domain authority’ is still visible in 2018, but this new phenomenon is probably based on site quality scores, potential authorship value scores, user interest and other classifiers, as well as PageRank.

It takes a lot of work and a lot of time to create, or even mimic, such a site.

QUOTE: “Brands are the solution, not the problem, Brands are how you sort out the cesspool.” Eric Schmidt, Google

Google is going to present users with sites that are recognisable to them. If you are a ‘brand’ in your space or well-cited site, Google wants to rank your stuff at the top as it won’t make Google look stupid.

Getting links from ‘Brands’ (or well-cited websites) in your niche can also mean getting ‘quality links’.

Easier said than done, for most, of course, but that is the point of link building – to get these type of links.

Does Google Prefer Big Brands In Organic SERPs?

Well, yes. It’s hard to imagine that a system like Google’s was not designed exactly over the last few years to deliver the listings it does today – and it is often filled even in 2018 with content that ranks high likely because of the domain the content is on.

Big brands also find it harder to take advantage of ‘domain authority’ in 2018. It’s harder for most businesses because low-quality content on parts of a domain can negatively impact the rankings of the entire domain.

QUOTE: “I mean it’s kind of like we look at your web site overall. And if there’s this big chunk of content here or this big chunk kind of important wise of your content, there that looks really iffy, then that kind of reflects across the overall picture of your website. ”  John Mueller, Google

Google has introduced (at least) a ‘perceived’ risk to publishing lots of lower-quality pages on your site, in an effort to curb the production of old-style SEO-friendly content based on manipulating early search engine algorithms.

We are dealing with new algorithms designed to target old-style SEO tactics that revolve around the truism that DOMAIN ‘REPUTATION’ plus LOTS of PAGES plus SEO equals LOTS of keywords equals LOTS of Google traffic.

A big site can’t just get away with publishing LOTS of lower quality content in the cavalier way they used to – not without the ‘fear’ of primary content being impacted and organic search traffic throttled negatively to important pages on the site.

Google still uses links and PageRank:

QUOTE: “DYK that after 18 years we’re still using* PageRank (and 100s of other signals) in ranking?” Gary Illyes, Google 2017

Google is very probably also using user metrics in some way to determine the ‘quality’ of your site:

QUOTE: “I think that’s always an option. Yeah. That’s something that–I’ve seen sites do that across the board, not specifically for blogs, but for content in general, where they would regularly go through all of their content and see, well, this content doesn’t get any clicks, or everyone who goes there kind of runs off screaming.” John Mueller, Google

Your online reputation is evidently calculated by Google from many metrics.

Small businesses can still build this ‘domain reputation’ over time, if they focus on a content strategy based on depth and quality rather than breadth when it comes to how content is published on the website.

I did, on my own site.

Instead of publishing LOTS of pages, focus on fewer pages that are of high quality. You can better predict your success in ranking for the long term for a particular keyword phrase this way.

Then you rinse and repeat.

Failure to meet these standards for quality content may impact rankings noticeably around major Google quality updates.

Is Domain Age An Important Google Ranking Factor?

No, not in isolation.

Having a ten-year-old domain that Google knows nothing about is almost the same as having a brand new domain.

A 10-year-old site that’s been continually cited, year on year, by other more authoritative and trusted sites? That’s valuable.

But that’s not the age of your website address ON ITS OWN in-play as a ranking factor.

A one-year-old domain cited by authority sites is just as valuable if not more valuable than a ten-year-old domain with no links and no search-performance history.

Perhaps domain age comes into play when other factors are considered – but I think Google works very much like this on all levels, with all ‘Google ranking factors‘, and all ranking ‘conditions’.

I don’t think you can consider discovering ‘ranking factors’ without ‘ranking conditions’.

Other Ranking Factors:

  1. Domain age; (NOT ON ITS OWN)
  2. Length of site domain registration; (I don’t see much benefit ON ITS OWN, even knowing “Valuable (legitimate) domains are often paid for several years in advance, while doorway (illegitimate) domains rarely are used for more than a year.”) – paying for a domain in advance just tells others you don’t want anyone else using the domain name; it is not much of an indication that you’re going to do something Google cares about.
  3. Domain registration information was hidden/anonymous; (possibly, under human review if OTHER CONDITIONS are met like looking like a spam site)
  4. Site top level domain (geographical focus, e.g. a country-specific TLD versus .com); (YES)
  5. Site top level domain (e.g. .com versus .info); (DEPENDS)
  6. Subdomain or root domain? (DEPENDS)
  7. Domain past records (how often it changed IP); (DEPENDS)
  8. Domain past owners (how often the owner was changed) (DEPENDS)
  9. Keywords in the domain; (DEFINITELY – ESPECIALLY EXACT KEYWORD MATCH – although Google has a lot of filters that mute the performance of an exact match domain in 2018)
  10. Domain IP; (DEPENDS – for most, no)
  11. Domain IP neighbours; (DEPENDS – for most, no)
  12. Domain external mentions (non-linked citations) (perhaps)
  13. Geo-targeting settings in Google Webmaster Tools (YES – of course)

Google Penalties For Unnatural Footprints

In 2018, you need to be aware that what works to improve your rank can also get you penalised (faster, and a lot more noticeably).

In particular, the Google web spam team is currently waging a PR war on sites that rely on unnatural links and other ‘manipulative’ tactics (and handing out severe penalties if it detects them). And that’s on top of many algorithms already designed to look for other manipulative tactics (like keyword stuffing or boilerplate spun text across pages).

Google is making sure it takes longer to see results from black and white hat SEO, and is intent on ensuring a flux in its SERPs based largely on where the searcher is in the world at the time of the search, and where the business is located near to that searcher.

There are some things you cannot directly influence legitimately to improve your rankings, but there is plenty you CAN do to drive more Google traffic to a web page.

Ranking Factors

Google has HUNDREDS of ranking factors with signals that can change daily, weekly, monthly or yearly to help it work out where your page ranks in comparison to other competing pages in SERPs.

You will not ever find every ranking factor. Many ranking factors are on-page or on-site and others are off-page or off-site. Some ranking factors are based on where you are, or what you have searched for before.

I’ve been in online marketing for 15 years. In that time, a lot has changed. I’ve learned to focus on aspects that offer the greatest return on investment of your labour.

Read my article on a more complete list of potential Google ranking factors.

Learn SEO Basics

Here are few simple SEO tips to begin with:

  • If you are just starting out, don’t think you can fool Google about everything all the time. Google has VERY probably seen your tactics before. So, it’s best to keep your plan simple. GET RELEVANT. GET REPUTABLE. Aim for a healthy, satisfying visitor experience. If you are just starting out – you may as well learn how to do it within Google’s Webmaster Guidelines first. Make a decision, early, if you are going to follow Google’s guidelines, or not, and stick to it. Don’t be caught in the middle with an important project. Do not always follow the herd.
  • If your aim is to deceive visitors from Google, in any way, Google is not your friend. Google is hardly your friend at any rate – but you don’t want it as your enemy. Google will send you lots of free traffic though if you manage to get to the top of search results, so perhaps they are not all that bad.
  • A lot of optimisation techniques that are in the short term effective at boosting a site’s position in Google are against Google’s guidelines. For example, many links that may have once promoted you to the top of Google, may, in fact, today be hurting your site and its ability to rank high in Google. Keyword stuffing might be holding your page back. You must be smart, and cautious, when it comes to building links to your site in a manner that Google *hopefully* won’t have too much trouble with, in the FUTURE. Because they will punish you in the future.
  • Don’t expect to rank number 1 in any niche for a competitive keyword phrase without a lot of investment and work. Don’t expect results overnight. Expecting too much too fast might get you in trouble with the Google webspam team.
  • You don’t pay anything to get into Google, Yahoo or Bing natural, or free listings. It’s common for the major search engines to find your website pretty quickly by themselves within a few days. This is made so much easier if your cms actually ‘pings’ search engines when you update content (via XML sitemaps or RSS for instance).
  • To be listed and rank high in Google and other search engines, you really should consider and mostly abide by search engine rules and official guidelines for inclusion. With experience and a lot of observation, you can learn which rules can be bent, and which tactics are short-term and perhaps, should be avoided.
  • Google ranks websites (relevancy aside for a moment) by the number and quality of incoming links to a site from other websites (amongst hundreds of other metrics). Generally speaking, a link from a page to another page is viewed in Google’s ‘eyes’ as a vote for the page the link points to. The more votes a page gets, the more trusted a page can become, and the higher Google will rank it – in theory. Rankings are HUGELY affected by how much Google ultimately trusts the DOMAIN the page is on. BACKLINKS (links from other websites) trump every other signal.
  • I’ve always thought if you are serious about ranking – do so with ORIGINAL COPY. It’s clear – search engines reward good content it hasn’t found before. It indexes it blisteringly fast, for a start (within a second, if your website isn’t penalised!). So – make sure each of your pages has enough text content you have written specifically for that page – and you won’t need to jump through hoops to get it ranking.
  • If you have original, quality content on a site, you also have a chance of generating inbound quality links (IBL). If your content is found on other websites, you will find it hard to get links, and it probably will not rank very well as Google favours diversity in its results. If you have original content of sufficient quality on your site, you can then let authority websites – those with online business authority – know about it, and they might link to you – this is called a quality backlink.
  • Search engines need to understand that ‘a link is a link’ that can be trusted. Links can be designed to be ignored by search engines with the rel nofollow attribute.
  • Search engines can also find your site by other websites linking to it. You can also submit your site to search engines directly, but I haven’t submitted any site to a search engine in the last ten years – you probably don’t need to do that. If you have a new site, I would immediately register it with Google Webmaster Tools these days.
  • Google and Bing use a crawler (Googlebot and Bingbot) that spiders the web looking for new links to find. These bots might find a link to your homepage somewhere on the web and then crawl and index the pages of your site if all your pages are linked together. If your website has an XML sitemap, for instance, Google will use that to include that content in its index. An XML sitemap is INCLUSIVE, not EXCLUSIVE. Google will crawl and index every single page on your site – even pages outwith an XML sitemap (there is a short sitemap sketch after this list).
  • Many think that Google won’t allow new websites to rank well for competitive terms until the web address “ages” and acquires “trust” in Google – I think this depends on the quality of the incoming links. Sometimes your site will rank high for a while then disappears for months. A “honeymoon period” to give you a taste of Google traffic, perhaps, or a period to better gauge your website quality from an actual user perspective.
  • Google WILL classify your site when it crawls and indexes your site – and this classification can have a DRASTIC effect on your rankings. It’s important for Google to work out WHAT YOUR ULTIMATE INTENT IS – do you want to be classified as a thin affiliate site made ‘just for Google’, a domain holding page or a small business website with a real purpose? Ensure you don’t confuse Google in any way by being explicit with all the signals you can – to show on your website you are a real business, and your INTENT is genuine – and even more important today – FOCUSED ON SATISFYING A VISITOR.
  • NOTE – If a page exists only to make money from Google’s free traffic – Google calls this spam. I go into this more, later in this guide.
  • The transparency you provide on your website in text and links about who you are, what you do, and how you’re rated on the web or as a business is one way that Google could use (algorithmically and manually) to ‘rate’ your website. Note that Google has a HUGE army of quality raters and at some point they will be on your site if you get a lot of traffic from Google.
  • To rank for specific keyword phrase searches, you usually need to have the keyword phrase or highly relevant words on your page (not necessarily all together, but it helps) or in links pointing to your page/site.
  • Ultimately what you need to do to compete is largely dependent on what the competition for the term you are targeting is doing. You’ll need to at least mirror how hard they are competing if a better opportunity is hard to spot.
  • As a result of other quality sites linking to your site, the site now has a certain amount of real PageRank that is shared with all the internal pages that make up your website, and that will in future help provide a signal of where those pages rank.
  • Yes, you need to build links to your site to acquire more PageRank, or Google ‘juice’ – or what we now call domain authority or trust. Google is a link-based search engine – it does not quite understand ‘good’ or ‘quality’ content – but it does understand ‘popular’ content. It can also usually identify poor, or THIN CONTENT – and it penalises your site for that – or – at least – it takes away the traffic you once had with an algorithm change. Google doesn’t like calling the actions it takes a ‘penalty’ – it doesn’t look good. They blame your ranking drops on their engineers getting better at identifying quality content or links, or the inverse – low-quality content and unnatural links. If they do take action on your site for paid links – they call this a ‘Manual Action’ and you will get notified about it in Webmaster Tools if you sign up.
  • Link building is not JUST a numbers game, though. One link from a “trusted authority” site in Google could be all you need to rank high in your niche. Of course, the more “trusted” links you attract, the more Google will trust your site. It is evident you need MULTIPLE trusted links from MULTIPLE trusted websites to get the most from Google in 2018.
  • Try and get links within page text pointing to your site with relevant, or at least, natural looking, keywords in the text link – not, for instance, in blogrolls or site-wide links. Try to ensure the links are not obviously “machine generated” e.g. site-wide links on forums or directories. Get links from pages, that in turn, have a lot of links to them, and you will soon see benefits.
  • Onsite, consider linking to your other pages by linking to pages within main content text. I usually only do this when it is relevant – often, I’ll link to relevant pages when the keyword is in the title elements of both pages. I don’t go in for auto-generating links at all. Google has penalised sites for using particular auto link plugins, for instance, so I avoid them.
  • Linking to a page with actual key-phrases in the link helps a great deal in all search engines when you want to feature for specific key terms. For example, “SEO Scotland” as opposed to “click here“. Saying that – in 2018, Google is punishing manipulative anchor text very aggressively, so be sensible – and stick to brand mentions and plain URL links that build authority with less risk. I rarely ever optimise for grammatically incorrect terms these days (especially with links).
  • I think anchor text links in internal navigation are still valuable – but keep it natural. Google needs links to find and help categorise your pages. Don’t underestimate the value of a clever internal link keyword-rich architecture and be sure to understand, for instance, how many words Google counts in a link, but don’t overdo it. Too many links on a page could be seen as a poor user experience. Avoid lots of hidden links in your template navigation.
  • Search engines like Google ‘spider’ or ‘crawl’ your entire site by following all the links on your site to new pages, much as a human would click on the links to your pages. Google will crawl and index your pages, and within a few days usually, begin to return your pages in SERPs.
  • After a while, Google will know about your pages, and keep the ones it deems ‘useful’ – pages with original content, or pages with a lot of links to them. The rest will be de-indexed. Be careful – too many low-quality pages on your site will impact your overall site performance in Google. Google is on record talking about good and bad ratios of quality content to low-quality content.
  • Ideally, you will have unique pages, with unique page titles and unique page meta descriptions. Google does not seem to use the meta description when ranking your page for specific keyword searches if it is not relevant, and unless you are careful you might end up just giving spammers free original text for their site and not yours, once they scrape your descriptions and put the text in the main content on their site. I don’t worry about meta keywords these days as Google and Bing say they either ignore them or use them as spam signals.
  • Google will take some time to analyse your entire site, examining text content and links. This process is taking longer and longer these days but is ultimately determined by your domain reputation and real PageRank.
  • If you have a lot of duplicate low-quality text already found by Googlebot on other websites it knows about; Google will ignore your page. If your site or page has spammy signals, Google will penalise it, sooner or later. If you have lots of these pages on your site – Google will ignore most of your website.
  • You don’t need to keyword stuff your text to beat the competition.
  • You optimise a page for more traffic by increasing the frequency of the desired key phrase, related key terms, co-occurring keywords and synonyms in links, page titles and text content. There is no ideal amount of text – no magic keyword density. Keyword stuffing is a tricky business, too, these days.
  • I prefer to make sure I have as many UNIQUE relevant words on the page that make up as many relevant long tail queries as possible.
  • If you link out to irrelevant sites, Google may ignore the page, too – but again, it depends on the site in question. Who you link to, or HOW you link to, REALLY DOES MATTER – I expect Google to use your linking practices as a potential means by which to classify your site. Affiliate sites, for example, don’t do well in Google these days without some good quality backlinks and higher quality pages.
  • Many search engine marketers think who you link out to (and who links to you) helps determine a topical community of sites in any field or a hub of authority. Quite simply, you want to be in that hub, at the centre if possible (however unlikely), but at least in it. I like to think of this one as a good thing to remember in the future as search engines get even better at determining topical relevancy of pages, but I have never actually seen any granular ranking benefit (for the page in question) from linking out.
  • I’ve got by, by thinking external links to other sites should probably be on single pages deeper in your site architecture, with the pages receiving all your Google Juice once it’s been “soaked up” by the higher pages in your site structure (the home page, your category pages). This tactic is old school but I still follow it. I don’t think you need to worry about that too much in 2018.
  • Original content is king and will attract a “natural link growth” – in Google’s opinion. Too many incoming links too fast might devalue your site, but again, I usually err on the safe side – I always aimed for massive diversity in my links – to make them look ‘more natural’. Honestly, I go for natural links in 2018, full stop, for this website.
  • Google can devalue whole sites, individual pages, template generated links and individual links if Google deems them “unnecessary” and a ‘poor user experience’.
  • Google knows who links to you, the “quality” of those links, and whom you link to. These – and other factors – help ultimately determine where a page on your site ranks. To make it more confusing – the page that ranks on your site might not be the page you want to rank, or even the page that determines your rankings for this term. Once Google has worked out your domain authority – sometimes it seems that the most relevant page on your site Google HAS NO ISSUE with will rank.
  • Google decides which pages on your site are important or most relevant. You can help Google by linking to your important pages and ensuring at least one page is well optimised amongst the rest of your pages for your desired key phrase. Always remember Google does not want to rank ‘thin’ pages in results – any page you want to rank – should have all the things Google is looking for. That’s a lot these days!
  • It is important you spread all that real ‘PageRank’ – or link equity – to your sales keyword / phrase rich sales pages, and as much remains to the rest of the site pages, so Google does not ‘demote’ pages into oblivion –  or ‘supplemental results’ as we old timers knew them back in the day. Again – this is slightly old school – but it gets me by, even today.
  • Consider linking to important pages on your site from your home page, and other important pages on your site.
  • Focus on RELEVANCE first. Then, focus your marketing efforts and get REPUTABLE. This is the key to ranking ‘legitimately’ in Google in 2018.
  • Every few months Google changes its algorithm to punish sloppy optimisation or industrial manipulation. Google Panda and Google Penguin are two such updates, but the important thing is to understand Google changes its algorithms constantly to control its listings pages (over 600 changes a year we are told).
  • The art of rank modification is to rank without tripping these algorithms or getting flagged by a human reviewer – and that is tricky!
  • Focus on improving website download speeds at all times. The web is changing very fast, and a fast website is a good user experience.
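
Since XML sitemaps come up several times in the list above, here is a minimal sketch of generating a basic one from a list of URLs (the URLs are placeholders – most CMSs will generate a sitemap for you):

```python
# Minimal sketch: write a basic XML sitemap from a list of URLs.
# The URLs are placeholder examples - a real CMS usually generates this for you.
from xml.sax.saxutils import escape

urls = [
    "https://www.example.com/",
    "https://www.example.com/services/",
]

lines = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
for url in urls:
    lines.append(f"  <url><loc>{escape(url)}</loc></url>")
lines.append("</urlset>")

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))
```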

Welcome to the tightrope that is modern web optimisation.

If you are a geek and would like to learn more see my post on potential Google ranking factors.

Read on if you would like to learn how to SEO….

Keyword Research

The first step in any professional campaign is to do some keyword research and analysis.

Chart: rankings for a valuable 4-word key phrase improving over time

Somebody asked me about a simple white hat tactic, and I think this is probably the simplest thing anyone can do that guarantees results.

The chart above (from last year) illustrates a reasonably valuable 4-word term I noticed a page of mine didn’t rank highly in Google for, but which I thought it probably should and could rank for, with this simple technique.

I thought it would serve as a simple example to illustrate an aspect of on-page SEO, or ‘rank modification’, that’s white hat, 100% Google friendly and never, ever going to cause you a problem with Google.

This ‘trick’ works with any keyword phrase, on any site, with obvious differing results based on the availability of competing pages in SERPs, and availability of content on your site.

The keyword phrase I am testing rankings for isn’t ON the page, and I did NOT add the key phrase to the page, to incoming links, or via any technical tricks like redirects or hidden techniques, but as you can see from the chart, rankings seem to be going in the right direction.

You can profit from it if you know a little about how Google works (or seems to work, in many observations, over years, excluding when Google throws you a bone on synonyms – you can’t ever be 100% certain you know how Google works on any level, unless it’s data showing you’re wrong, of course).

What did I do to rank number 1 from nowhere for that key phrase?

I added one keyword to the page in plain text because adding the actual ‘keyword phrase’ itself would have made my text read a bit keyword stuffed for other variations of the main term. It gets interesting if you do that to a lot of pages, and a lot of keyword phrases. The important thing is keyword research – and knowing which unique keywords to add.

This example illustrates that a key to ‘relevance’ on a page, in a lot of instances, is a keyword.

The precise keyword.

Yes – plenty of other things can be happening at the same time. It’s hard to identify EXACTLY why Google ranks pages all the time…but you can COUNT on other things happening and just get on with what you can see works for you.

In a time of light optimisation, it’s useful to EARN a few terms you SHOULD rank for in simple ways that leave others wondering how you got it.

Of course, you can still keyword stuff a page, or still spam your link profile – but it is ‘light’ optimisation I am genuinely interested in testing on this site – how to get more with less – I think that’s the key to not tripping Google’s aggressive algorithms.
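
If you want to sanity-check which words of a target phrase are already present in a page’s copy before you add anything, here is a rough sketch (the URL and phrase are placeholders, and it assumes the third-party requests and BeautifulSoup libraries):

```python
# Rough sketch: check which words of a target phrase already appear in the
# visible text of a page. URL and phrase are placeholder examples.
import re
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/some-page/"
target_phrase = "example four word phrase"

soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
for tag in soup(["script", "style"]):
    tag.decompose()
page_words = set(re.findall(r"[a-z0-9']+", soup.get_text().lower()))

for word in target_phrase.lower().split():
    status = "present" if word in page_words else "MISSING"
    print(f"{word}: {status}")
```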

There are many tools on the web to help with basic keyword research (including the Google Keyword Planner tool), and there are even more useful third-party SEO tools to help you do this.

You can use many keyword research tools to identify quickly opportunities to get more traffic to a page.

I built my own tool to do this.


Google Analytics Keyword ‘Not Provided.’

Google Analytics was the very best place to look at keyword opportunity for some (especially older) sites, but that all changed a few years back.

Google stopped telling us which keywords are sending traffic to our sites from the search engine back in October 2011, as part of privacy concerns for its users.

QUOTE: “Google will now begin encrypting searches that people do by default, if they are logged into already through a secure connection. The change to SSL search also means that sites people visit after clicking on results at Google will no longer receive “referrer” data that reveals what those people searched for, except in the case of ads.”

Google Analytics now instead displays – keyword “not provided“, instead.

QUOTE: “In Google’s new system, referrer data will be blocked. This means site owners will begin to lose valuable data that they depend on, to understand how their sites are found through Google. They’ll still be able to tell that someone came from a Google search. They won’t, however, know what that search was.” SearchEngineLand

You can still get some of this data if you sign up for Google Webmaster Tools (and you can combine this in Google Analytics), but the data even there is limited and often not the most accurate. The keyword data can be useful, though – and access to backlink data is essential these days.
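
For what it is worth, here is a hedged sketch of pulling that (limited) query data programmatically from the Search Console / Webmaster Tools API – it assumes you have already set up API credentials for your property, and the property URL, key file and dates are placeholders:

```python
# Hedged sketch: pull query data from the Search Console API (Webmasters v3).
# Assumes a service-account JSON key with access to the property; the
# property URL, key file and dates are placeholder examples.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("webmasters", "v3", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2018-01-01",
        "endDate": "2018-01-31",
        "dimensions": ["query"],
        "rowLimit": 100,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"], round(row["position"], 1))
```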

If the website you are working on is an aged site – there’s probably a wealth of keyword data in Google Analytics.

My favourite keyword research tool is SEMRush.

More Reading:

Do Keywords In Bold Or Italic Help?

Some webmasters claim putting your keywords in bold or putting your keywords in italics is a beneficial ranking factor in terms of search engine optimising a page.

It is essentially impossible to test this, and I think these days, Google could well be using this (and other easy to identify on page optimisation efforts) to determine what to punish a site for, not promote it in SERPs.

Any item you can ‘optimise’ on your page – Google can use this against you to filter you out of results.

I use bold or italics these days specifically for users.

I only use emphasis if it’s natural or this is really what I want to emphasise!

Do not tell Google what to filter you for that easily.

I think Google treats websites they trust far different to others in some respect.

That is, more trusted sites might get treated differently than untrusted sites.

Keep it simple, natural, useful and random.

QUOTE: “You’ll probably get more out of bolding text for human users / usability in the end. Bots might like, but they’re not going to buy anything.” John Mueller, Google 2017

How Many Words & Keywords Do I Use On A Page?

How much text do you put on a page to rank for a certain keyword?

The answer is there is no optimal amount of text per page, but how much text you’ll ‘need’ will be based on your DOMAIN AUTHORITY, your TOPICAL RELEVANCE and how much COMPETITION there is for that term, and HOW COMPETITIVE that competition is.

Instead of thinking about the quantity of the text, you should think more about the quality of the content on the page. Optimise this with searcher intent in mind. Well, that’s how I do it.

I don’t find that you need a minimum amount of words or text to rank in Google. I have seen pages with 50 words outrank pages with 100, 250, 500 or 1000 words. Then again I have seen pages with no text rank on nothing but inbound links or other ‘strategy’. In 2018, Google is a lot better at hiding away those pages, though.

At the moment, I prefer long form pages with a lot of text although I still rely heavily on keyword analysis to make my pages. The benefits of longer pages are that they are great for long tail key phrases.

Creating deep, information rich pages focuses the mind when it comes to producing authoritative, useful content.

Every site is different. Some pages, for example, can get away with 50 words because of a good link profile and the domain it is hosted on. For me, the important thing is to make a page relevant to a user’s search query.

I don’t care how many words I achieve this with and often I need to experiment on a site I am unfamiliar with. After a while, you get an idea how much text you need to use to get a page on a certain domain into Google.

One thing to note – the more text you add to the page, as long as it is unique, keyword rich and relevant, the more that page will be rewarded with more visitors from Google.

There is no optimal number of words on a page for placement in Google. Every website – every page – is different from what I can see. Don’t worry too much about word count if your content is original and informative. Google will probably reward you on some level – at some point – if there is lots of unique text on all your pages.

QUOTE: “I don’t know specifically about the scorecard, how that’s set up, or how the Google News guidelines are in general.  But in practice from a web search point of view, we don’t have limits on the number of words you can have on a page.” John Mueller, Google 2015

What is The Perfect Keyword Density?

The short answer to this is – there is no perfect keyword density.

There is no one-size-fits-all keyword density, no optimal percentage guaranteed to rank any page at number 1. However, I do know you can keyword stuff a page and trip a spam filter.

Most web optimisation professionals agree there is no ideal percentage of keywords in text that will get a page to number 1 in Google. Search engines are not that easy to fool, although the key to success in many fields is doing simple things well (or, at least, better than the competition).

I write natural page copy where possible, always focused on the key terms – I never calculate density to identify the best percentage – there are way too many other things to work on. I have looked into this. If it looks natural, it’s OK with me.

I aim to include related terms, long-tail variants and synonyms in Primary Content – at least ONCE, as that is all some pages need.

QUOTE: “keyword density, in general, is something I wouldn’t focus on. Search engines have kind of moved on from there.” John Mueller, Google 2014

Optimal keyword density is a myth, although there are many who would argue otherwise.
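
If you are curious what the metric even measures, it is just occurrences of the phrase divided by total words – which is exactly why chasing an ‘ideal’ percentage is pointless:

```python
# Keyword density is just occurrences of a phrase divided by total words -
# a blunt metric, which is why chasing an 'ideal' percentage is pointless.
import re

def keyword_density(text, phrase):
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = phrase.lower().split()
    hits = sum(
        words[i:i + len(phrase_words)] == phrase_words
        for i in range(len(words) - len(phrase_words) + 1)
    )
    return 100.0 * hits * len(phrase_words) / max(len(words), 1)

copy = "An SEO tutorial for beginners covering SEO basics and an SEO tutorial example."
print(f"{keyword_density(copy, 'SEO tutorial'):.1f}% keyword density")
```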

Keyword Stuffing

Keyword stuffing is simply the process of repeating the same keyword or key phrases over and over in a page. It’s counterproductive. It’s a signpost of a very low-quality spam site and is something Google clearly recommends you avoid.

It is:

QUOTE: “the practice of loading a webpage with keywords in an attempt to manipulate a site’s ranking in Google’s search results“. Google

Keyword stuffing text makes your copy often unreadable and so, a bad user experience. It often gets a page booted out of Google but it depends on the intent and the trust and authority of a site. It’s sloppy SEO in 2018.

It is not a tactic you want to employ in search of long-term rankings.

QUOTE: “Keyword Stuffed” Main Content Pages may be created to lure search engines and users by repeating keywords over and over again, sometimes in unnatural and unhelpful ways. Such pages are created using words likely to be contained in queries issued by users. Keyword stuffing can range from mildly annoying to users, to complete gibberish. Pages created with the intent of luring search engines and users, rather than providing meaningful MC to help users, should be rated Lowest.” Google Search Quality Evaluator Guidelines 2017

Just because someone else is successfully doing it do not automatically think you will get away with it.

Don’t do it – there are better ways of ranking in Google without resorting to it.

John Mueller said in a 2015 hangout: “if we see that things like keyword stuffing are happening on a page, then we’ll try to ignore that, and just focus on the rest of the page”.

Does that imply that what we would call a keyword stuffing “penalty” for a page, Google calls ‘ignoring that‘? From what I’ve observed, pages can seem to perform badly for sloppy keyword phrase stuffing, although they can still rank for long-tail variations of it.

QUOTE: “The bottom line is using more relevant keyword variations = more traffic.” Aaron Wall

He goes further with an excellent piece of advice even in 2018:

QUOTE:  Each piece of duplication in your on-page SEO strategy is ***at best*** wasted opportunity. Worse yet, if you are aggressive with aligning your on page heading, your page title, and your internal + external link anchor text the page becomes more likely to get filtered out of the search results (which is quite common in some aggressive spaces). Aaron Wall, 2009

Focus On The User

As Google says in their manifesto:

QUOTE: “Focus on the user and all else will follow.” Google

It is time to focus on the user when it comes to content marketing, and the bottom line is you need to publish unique content free from any low-quality signals if you expect some sort of traction in Google SERPs (Search Engine Result Pages).

QUOTE: “high quality content is something I’d focus on. I see lots and lots of SEO blogs talking about user experience, which I think is a great thing to focus on as well. Because that essentially kind of focuses on what we are trying to look at as well. We want to rank content that is useful for (Google users) and if your content is really useful for them, then we want to rank it.” John Mueller, Google 2016

Those in every organisation with the responsibility of adding content to a website should understand these fundamental aspects of satisfying web content, because the door is firmly closing on unsatisfying web content.

Low-quality content can severely impact the success of SEO in 2018.

SEO copywriting is a bit of a dirty word – but the text on a page still needs to be optimised, using familiar methods, albeit published in a markedly different way than we, as SEOs, used to get away with.

Optimise For User Intent & Satisfaction

Graph: Traffic performance of a page in Google

QUOTE: “Basically you want to create high-quality sites following our webmaster guidelines, and focus on the user, try to answer the user, try to satisfy the user, and all eyes will follow.” Gary Illyes, Google 2016

When it comes to writing SEO-friendly text for Google, we must optimise for user intent, not simply what a user typed into Google.

Google will send people looking for information on a topic to the highest quality, relevant pages it knows about, often BEFORE it relies on how Google ‘used‘ to work e.g. relying on finding near or exact match instances of a keyword phrase on any one page, regardless of the actual ‘quality’ of that page.

Google is constantly evolving to better understand the context and intent of user behaviour, and it doesn’t mind rewriting the query used to serve high-quality pages to users that more comprehensively deliver on user satisfaction e.g. explore topics and concepts in a unique and satisfying way.

Of course, optimising for user intent, even in this fashion, is something a lot of marketers had been doing long before query rewriting and  Google Hummingbird came along.

Focus on ‘Things’, Not ‘Strings’

QUOTE: “we’ve been working on an intelligent model—in geek-speak, a “graph”—that understands real-world entities and their relationships to one another: things, not strings” Google 2012

Google is better at working out what a page is about, and what it should be about to satisfy the intent of a searcher, and it isn’t relying only on keyword phrases on a page to do that anymore.

Google has a Knowledge Graph populated with NAMED ENTITIES and in certain circumstances, Google relies on such information to create SERPs.

Google has plenty of options when rewriting the query in a contextual way, based on what you searched for previously, who you are, how you searched and where you are at the time of the search.

Can I Just Write Naturally and Rank High in Google?

Yes, you must write naturally (and succinctly) in 2018, but if you have no idea of the keywords you are targeting, and no expertise in the topic, you will be left behind those who do.

You can just ‘write naturally’ and still rank, albeit for fewer keywords than you would have if you optimised the page.

There are too many competing pages targeting the top spots not to optimise your content.

Naturally, how much text you need to write, how much you need to work into it, and where you ultimately rank, is going to depend on the domain reputation of the site you are publishing the article on.

Do You Need Lots of Text To Rank Pages In Google?

User search intent is a way marketers describe what a user wants to accomplish when they perform a Google search.

SEOs have understood user search intent to fall broadly into the following categories and there is an excellent post on Moz about this.

  1. Transactional – The user wants to do something like buy, signup, register to complete a task they have in mind.
  2. Informational – The user wishes to learn something
  3. Navigational – The user knows where they are going

The Google human quality rater guidelines modify these to simpler constructs:

  • Do 
  • Know
  • Go

As long as you meet the user’s primary intent, you can do this with as few words as it takes to do so.

You do NOT need lots of text to rank in Google.

Optimising For ‘The Long Click’

When it comes to rating user satisfaction, there are a few theories doing the rounds at the moment that I think are sensible. Google could be tracking user satisfaction by proxy. When a user uses Google to search for something, user behaviour from that point on can be a proxy of the relevance and relative quality of the actual SERP.

What is a Long Click?

A user clicks a result and spends time on it, sometimes terminating the search.

What is a Short Click?

A user clicks a result and bounces back to the SERP, pogo-sticking between other results until a long click is observed. Google has this information if it wants to use it as a proxy for query satisfaction.

For more on this, I recommend this article on the time to long click.

Optimise Supplementary Content on the Page

Once you have the content, you need to think about supplementary content and secondary links that help users on their journey of discovery.

That supplementary content CAN be links to your own content on other pages, but if you are really helping a user understand a topic, you should be LINKING OUT to other helpful resources e.g. other websites.

A website that does not link out to ANY other website could be interpreted accurately to be at least, self-serving. I can’t think of a website that is the true end-point of the web.

  • TASK – On informational pages, LINK OUT to related pages on other sites AND on other pages on your own website where RELEVANT
  • TASK – For e-commerce pages, ADD RELATED PRODUCTS.
  • TASK – Create In-depth Content Pieces
  • TASK – Keep Content Up to Date, Minimise Ads, Maximise Conversion, Monitor For broken, or redirected links
  • TASK – Assign in-depth content to an author with some online authority, or someone with displayable expertise on the subject
  • TASK – If running a blog, first, clean it up. To avoid creating pages that might be considered thin content in 6 months, consider planning a wider content strategy. If you publish 30 ‘thinner’ pages about various aspects of a topic, you can then fold all this together into a single topic-centred page that helps a user understand something related to what you sell.

User Generated Content & Forum SEO Advice

It’s evident that Google wants forum administrators to work harder on managing the user generated content that Googlebot ‘rates’ as part of your site.

In a 2015 hangout, John Mueller said to “noindex untrusted post content” and went on to suggest that posts by new posters who haven’t been in the forum before, and threads that don’t have any answers, could perhaps be noindexed by default.

A very interesting statement was “how much quality content do you have compared to low-quality content”. That indicates Google is looking at this ratio. John says to identify “which pages are high-quality, which pages are lower quality, so that the pages that do get indexed are really the high-quality ones.”

John mentions to look too at “threads that don’t have any authoritative answers“.

I think that advice is relevant for any site with lots of content.

For more on primary main content optimisation see:

Page Title Element

<title>What Is The Best Title Tag For Google?</title>

The page title tag (or HTML Title Element) is arguably the most important on-page ranking element (with regard to web page optimisation). Keywords in page titles can undeniably HELP your pages rank higher in Google results pages (SERPs).

The page title is also often used by Google as the title of a search snippet link in search engine results pages.

QUOTE: “We do use it for ranking, but it’s not the most critical part of a page. So it’s not worthwhile filling it with keywords to hope that it works that way. In general, we try to recognise when a title tag is stuffed with keywords because that’s also a bad user experience for users in the search results. If they’re looking to understand what these pages are about and they just see a jumble of keywords, then that doesn’t really help.” John Mueller, Google 2016

For me, a perfect title tag in Google is dependent on a number of factors. I will lay down a couple below, but I have since expanded page title advice on another page (link below);

  1. A page title that is highly relevant to the page it refers to will maximise usability, search engine ranking performance and user experience ratings as Google measures these. It will probably be displayed in a web browser’s window title bar, bookmarks and in clickable search snippet links used by Google, Bing & other search engines. The title element is the “crown” of a web page with important keyword phrase featuring AT LEAST ONCE within it.
  2. Most modern search engines have traditionally placed a lot of importance in the words contained within this HTML element. A good page title is made up of keyword phrases of value and high search volumes.
  3. The last time I looked Google displayed as many characters as it can fit into a block element that’s about 600px wide and doesn’t exceed 1 line of text (on desktop). So – THERE IS NO BEST PRACTICE AMOUNT OF CHARACTERS any SEO could lay down as exact best practice to GUARANTEE a title will display, in full in Google, at least, as the search snippet title, on every device. Ultimately – only the characters and words you use will determine if your entire page title will be seen in a Google search snippet.
  4. If you want to *ENSURE* your FULL title tag shows in the desktop UK version of Google SERPs, stick to a shorter title of between 55-65 characters but that does not mean your title tag MUST end at 55 characters and remember your mobile visitors see a longer title (in the UK, in January 2018). What you see displayed in SERPs depends on the characters you use. In 2018 – I just expect what Google displays to change – so I don’t obsess about what Google is doing in terms of display. See the tests later on in this article.
  5. Google is all about ‘user experience’ and ‘visitor satisfaction’ in 2018 so it’s worth remembering that usability studies have shown that a good page title length is about seven or eight words long and fewer than 64 total characters. Longer titles are less scan-able in bookmark lists, and might not display correctly in many browsers (and of course probably will be truncated in SERPs).
  6. Google will INDEX perhaps 1000s of characters in a title… but I don’t think anyone knows exactly how many characters or words Google will count AS a TITLE TAG when determining the RELEVANCE OF A DOCUMENT for ranking purposes. It is a very hard thing to try to isolate accurately with all the testing and obfuscation Google uses to hide its ‘secret sauce’. I have had ranking success with longer titles – much longer titles. Google certainly reads ALL the words in your page title (unless you are spamming it silly, of course).
  7. You can probably include up to 12 words that will be counted as part of a page title, and consider using your important keywords in the first 8 words. The rest of your page title will be counted as normal text on the page.
  8. NOTE, in 2018, the HTML title element you choose for your page, may not be what Google chooses to include in your SERP snippet. The search snippet title and description is very much QUERY & DEVICE dependant these days. Google often chooses what it thinks is the most relevant title for your search snippet, and it can use information from your page, or in links to that page, to create a very different SERP snippet title.
  9. When optimising a title, you are looking to rank for as many terms as possible, without keyword stuffing your title. Often, the best bet is to optimise for a particular phrase (or phrases) – and take a more long-tail approach. Note that too many page titles and not enough actual page text per page could lead to doorway page type situations. A highly relevant unique page title is no longer enough to float a page with thin content. Google cares WAY too much about the page text content these days to let a good title hold up a thin page on most sites.
  10. Some page titles do better with a call to action – a call to action which reflects exactly a searcher’s intent (e.g. to learn something, or buy something, or hire something). THINK CAREFULLY before auto-generating keyword phrase footprints across a site using boiler-plating and article spinning techniques. Remember this is your hook in search engines, if Google chooses to use your page title in its search snippet, and there are a lot of competing pages out there in 2018.
  11. The perfect title tag on a page is unique to other pages on the site. You REALLY need to make your page titles UNIQUE (ESPECIALLY RELATIVE TO OTHER PAGES ON YOUR SITE), and minimise any duplication, especially on larger sites. Duplicate page titles across a site are often a sign of poor indexation control or doorway type pages.
  12. I like to make sure my keywords feature as early as possible in a title tag but the important thing is to have important keywords and key phrases in your page title tag SOMEWHERE.
  13. For me, when SEO is more important than branding, the company name goes at the end of the tag, and I use a variety of dividers to separate as no one way performs best. If you have a recognisable brand – then there is an argument for putting this at the front of titles – although Google often will change your title dynamically – sometimes putting your brand at the front of your snippet link title itself. I often leave out branding. There is no one size fits all approach as the strategy will depend on the type of page you are working with.
  14. Note that Google is pretty good these days at removing any special characters you have in your page title – and I would be wary of trying to make your title or Meta Description STAND OUT using special characters. That is not what Google wants, evidently, and they do give you a further chance to make your search snippet stand out with RICH SNIPPETS and SCHEMA mark-up.
  15. I like to think I write titles for search engines AND humans.
  16. Know that Google tweaks everything regularly – why not what the perfect title keys off? So MIX it up…
  17. Don’t obsess. Natural is probably better, and will only get better as engines evolve. I optimise for key-phrases, rather than just keywords.
  18. I prefer mixed case page titles as I find them more scan-able than titles with ALL CAPS or all lowercase.
  19. Historically the more domain trust, authority or relevance about a topic your SITE has on Google, the easier it is for a new page to rank for something. So bear that in mind. There is only so much you can do with your page titles – your websites rankings in Google are a LOT more to do with OFFSITE factors than ONSITE ones – negative and positive.
  20. Click satisfaction (whatever that is) is something that is likely measured by Google when ranking pages (Bing say they use it too), so it is really worth considering whether you are best optimising your page titles for user click-through satisfaction or optimising for more search engine rankings (the latter being risky in 2018).
  21. I would think keyword stuffing your page titles could be one area that Google could look at.
  22. Remember… think ‘keyword phrase’ rather than ‘keyword’, ‘keyword’, ‘keyword’. Use your page title text to target a less competitive long tail search term, especially if you are up against stiff competition.
  23. Google will select the best title it wants for your search snippet – and it will take that information from multiple sources, NOT just your page title element. A small title is often appended with more information about the domain. Sometimes, if Google is confident in the BRAND name, it will replace it with that (often adding it to the beginning of your title with a colon, or sometimes appending the end of your snippet title with the actual domain address the page belongs to).
  24. Beware of repeating keywords unnecessarily, keyword stuffing or using boilerplate text to create your titles. Any duplication that is perceived by Googlebot as manipulation is easily down-ranked by algorithms.
  25. If your content suits it (e.g. you have evergreen content on your site), I find that adding the year to some titles helps boost searches at the turn of the year. You need to be on top of this strategy year to year though, or you risk making your content OUTDATED (the opposite of what you want).

A Note About Title Tags;

When you write a page title, you have a chance right at the beginning of the page to tell Google (and other search engines) if this is a spam site or a quality site – such as – have you repeated the keyword four times or only once? I think title tags, like everything else, should probably be as simple as possible, with the keyword once and perhaps a related term if possible.

I always aim to keep my HTML page title elements simple and as unique as possible.
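To illustrate, here is a simple, hypothetical example of the kind of title I aim for – the keyword phrase once, early, within roughly 65 characters, and the brand (if used at all) at the end:

<title>Blue Widgets For Sale | Free UK Delivery | Example Brand</title>
<!-- Hypothetical: one key phrase, no repetition, short enough to display in full in most SERP snippets -->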

I’m certainly cleaning up the way I write my titles all the time.

More Reading:

Meta Keywords Tag


A hallmark of shady natural search engine optimisation companies – the meta-keywords tag is not used by Google to rank pages. Companies that waste time and resources on these items (and others like them) waste their clients’ money:

<meta name="Keywords" content="s.e.o., search engine optimisation, optimization">

I have one piece of advice for the meta keywords tag, which, like the title tag, goes in the head section of your web page: forget about it.

Even Google says so:

QUOTE: “a poor SEO who might otherwise convince you to do useless things like add more words to the keyword meta tag” Maile Ohye, Google 2017

If you are relying on meta-keyword optimisation to rank for terms, you’re dead in the water. From what I see, Google and Bing ignore meta keywords – or, at least, place no weight on them to rank pages. Yahoo may read them, but really, a search engine optimiser has more important things to worry about than this nonsense.

What about other search engines that use them? Hang on while I submit my site to those 75,000 engines first [sarcasm!]. Yes, ten years ago early search engines liked looking at your meta-keywords. I’ve seen OPs in forums ponder which is the best way to write these tags – with commas, with spaces, limiting to how many characters. Forget about meta-keyword tags – they are a pointless waste of time and bandwidth.

Tin Foil Hat Time

So you have a new site. You fill your home page meta tags with the 20 keywords you want to rank for – hey, that’s what optimisation is all about, isn’t it? You’ve just told Google by the third line of text what to filter you for. The meta name="Keywords" tag was originally intended for words that weren’t actually on the page but would help classify the document.

Sometimes competitors might use the information in your keywords to determine what you are trying to rank for, too….

I ignore meta keywords and often remove them from pages I work on.

Meta Description Tag

Like the title element and unlike the meta keywords tag, this one is important, both from a human and search engine perspective.

<meta name="Description" content="Get your site on the first page of Google,
Yahoo and Bing. Call us on 0800 689 0293. A company based in Scotland." />

Forget whether or not to put your keyword in it; make it relevant to a searcher and write it for humans, not search engines. If you want a 20-word meta description that accurately describes the page you have optimised (for one or two keyword phrases), make sure the keyword is in there – there is then more likelihood it will be used by Google as your snippet description in SERPs.

QUOTE: “However it can affect the way that users see your site in the search results and whether or not they actually click through to your site. So that’s kind of one one aspect there to keep in mind.” John Mueller, Google 2017

Google looks at the description but it probably does not use the description tag to rank pages in a very noticeable way. I certainly don’t know of an example that clearly shows a meta description helping a page rank on its own.

QUOTE: “it’s not the case that changing your descriptions or making them longer or shorter or tweaking them or putting keywords in there will affect your site’s ranking.” John Mueller 2017

Sometimes, I will ask a question with my titles, and answer it in the description, sometimes I will just give a hint.

That is a lot more difficult in 2018 as search snippets change depending on what Google wants to emphasise to its users.

It’s also important to have unique meta descriptions on every page on your site if you think it could help clickthrough rates.
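To illustrate the question-and-answer approach mentioned above, here is a hypothetical title and meta description pair (the topic and wording are invented):

<title>How Much Does A Boiler Service Cost?</title>
<meta name="Description" content="A straightforward guide to typical boiler service prices, what the engineer checks and how long the visit takes." />
<!-- Hypothetical example: the title asks the question the searcher typed, the description answers it for a human -->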

Tin Foil Hat Time

Sometimes I think if your titles are spammy, your keywords are spammy, and your meta description is spammy, Google might stop right there – even they probably will want to save bandwidth at some time. Putting a keyword in the description won’t take a crap site to number 1 or raise you 50 spots in a competitive niche – so why optimise for a search engine when you can optimise for a human? – I think that is much more valuable, especially if you are in the mix already – that is – on page one for your keyword.

So, the meta description tag is important in Google, Yahoo and Bing and every other engine listing – very important to get it right.

Make it for humans.

Oh, and by the way – Google seems to truncate anything over approximately 156 characters in the meta description, although this may be limited by pixel width in 2018.

You Are Allowed to Programmatically Generate Meta Descriptions on Large Sites

Google says you can programmatically auto-generate unique meta descriptions based on the content of the page.

Follow Google’s example:

<META NAME="Description" CONTENT="Author: J. K. Rowling, Illustrator: Mary GrandPré, Category: Books, Price: $17.99, Length: 784 pages">

…and their advice on why to do this:

QUOTE: “No duplication, more information, and everything is clearly tagged and separated. No real additional work is required to generate something of this quality: the price and length are the only new data, and they are already displayed on the site.” Google

I think it is very important to listen when Google tells you to do something in a very specific way, and Google does give clear advice in this area.

More Reading:

Robots Meta Tag

Example Robots Meta Tag;

<meta name="robots" content="index, nofollow" />

I could use the above meta tag to tell Google to index the page but not to follow any links on the page, if for some reason I did not want Google to crawl on to the pages it links to.

By default, Googlebot will index a page and follow links to it. So there’s no need to tag pages with content values of INDEX or FOLLOW. GOOGLE

There are various instructions you can make use of in your Robots Meta Tag, but remember Google by default WILL index and follow links, so you have NO need to include that as a command – you can leave the robots meta out completely – and probably should if you don’t have a clue.

Googlebot understands any combination of lowercase and uppercase. GOOGLE.

Valid values for the Robots Meta Tag “CONTENT” attribute are: “INDEX”, “NOINDEX”, “FOLLOW”, and “NOFOLLOW”.

Example Usage:
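For example – a couple of illustrative combinations (remember INDEX and FOLLOW are the defaults, so they never need to be declared):

<meta name="robots" content="noindex, follow" />
<!-- Keep this page out of Google's index, but still let Googlebot follow the links on it -->
<meta name="robots" content="noindex, nofollow" />
<!-- Equivalent to content="none" - do not index the page and do not follow its links -->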


Google understands and interprets the following robots meta tag values:

  • NOINDEX – prevents the page from being included in the index.
  • NOFOLLOW – prevents Googlebot from following any links on the page. (Note that this is different from the link-level NOFOLLOW attribute, which prevents Googlebot from following an individual link.)
  • NOARCHIVE – prevents a cached copy of this page from being available in the search results.
  • NOSNIPPET – prevents a description from appearing below the page in the search results, as well as prevents caching of the page.
  • NOODP – blocks the Open Directory Project description of the page from being used in the description that appears below the page in the search results.
  • NONE – equivalent to “NOINDEX, NOFOLLOW”.

Robots META Tag Quick Reference


I’ve included the robots meta tag in my tutorial as this IS one of only a few meta tags / HTML head elements I focus on when it comes to managing Googlebot and Bingbot. At a page level – it is a powerful way to control if your pages are returned in search results pages.

These meta tags go in the HEAD section of an HTML page and represent the only tags for Google I care about. Just about everything else you can put in the HEAD of your HTML document is quite unnecessary and maybe even pointless (for Google optimisation, anyway).

Robots.txt File

If you want to control which pages get crawled and indexed by Google see my article for beginners to the robots.txt file.

H1-H6: Page Headings

I can’t find any definitive proof online that says you need to use Heading Tags (H1, H2, H3, H4, H5, H6) or that they improve rankings in Google, and I have seen pages do well in Google without them – but I do use them, especially the H1 tag on the page.

For me, it’s another piece of a ‘perfect’ page, in the traditional sense, and I try to build a site for Google and humans.

<h1>This is a page title</h1>

I still generally only use one <h1> heading tag in my keyword targeted pages – I believe this is the way the W3C intended it to be used in HTML4 – and I ensure they are at the top of a page above relevant page text and written with my main keywords or related keyword phrases incorporated.

I have never experienced any problems using CSS to control the appearance of the heading tags making them larger or smaller.

You can use multiple H1s in HTML5, but most sites I find I work on still use HTML4.

I use as many H2 – H6 as is necessary depending on the size of the page, but I use H1, H2 & H3. You can see here how to use header tags properly (basically, just be consistent, whatever you do, to give your users the best user experience).
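For illustration, here is a rough sketch of the kind of consistent heading structure I mean (the headings and topic are hypothetical):

<h1>Search Engine Optimisation For Small Businesses</h1>
<h2>Optimising Your Page Titles</h2>
<h3>How Long Should A Page Title Be?</h3>
<h2>Optimising Your Meta Descriptions</h2>
<!-- One H1 at the top of the page, with H2 and H3 subheadings nested logically beneath it -->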

How many words in the H1 Tag? As many as I think is sensible – as short and snappy as possible usually.

QUOTE: “We do use H tags to understand the structure of the text on a page better.” John Mueller, Google 2015

I also discovered Google will use your Header tags as page titles at some level if your title element is malformed.

As always be sure to make your heading tags highly relevant to the content on that page and not too spammy, either.

Alt Tags & ALT Text

NOTE: Alt Tags are counted by Google (and Bing), but I would be careful over-optimising them. I’ve seen a lot of websites penalised for over-optimising invisible elements on a page. Don’t do it.

ALT tags are very important and I think a very rewarding area to get right.

QUOTE: “Yes… so the alt text is essentially shown when the images are turned off in most browsers so that’s something that we would count as part of the on-page text.” John Mueller, Google 2017

I used to always put the main keyword in an ALT once when optimising a page, but now I only do that if the image is very relevant too.

Don’t optimise your ALT tags (or rather, attributes) JUST for Google!

Use ALT tags (or rather, ALT Attributes) for descriptive text that helps visitors – and keep them unique where possible, like you do with your titles and meta descriptions.

QUOTE: “adding an alt tag is very easy to do and you should pretty much do it on all of your images. It helps your accessibility and it can help us understand what’s going on in your image.” Google, 2007

Don’t obsess. Don’t optimise your ALT tags just for Google – do it for humans, accessibility and usability. If you are interested, I conducted a simple test using ALT attributes to determine how many words I could use in IMAGE ALT text that Google would pick up.

And remember – even if, like me most days, you can’t be bothered with all the image ALT tags on your page, at least, use a blank ALT (or NULL value) so people with screen readers can enjoy your page.
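For illustration – a descriptive ALT on a meaningful image and a blank (null) ALT on a purely decorative one (the filenames and text are hypothetical):

<img src="blue-widget-tap.jpg" alt="A blue widget being fitted to a kitchen tap">
<!-- Descriptive ALT text written for visitors and screen readers, not stuffed with keywords -->
<img src="divider.gif" alt="">
<!-- Decorative image - a blank ALT value so screen readers simply skip it -->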

John Mueller has since commented about Alt Tags:

QUOTE: “alt attribute should be used to describe the image. So if you have an image of a big blue pineapple chair you should use the alt tag that best describes it, which is alt=”big blue pineapple chair.” title attribute should be used when the image is a hyperlink to a specific page. The title attribute should contain information about what will happen when you click on the image. For example, if the image will get larger, it should read something like, title=”View a larger version of the big blue pineapple chair image.” John Mueller, Google

Barry continues with a quote:

QUOTE: “As the Googlebot does not see [the text in the] the images directly, we generally concentrate on the information provided in the “alt” attribute. Feel free to supplement the “alt” attribute with “title” and other attributes if they provide value to your users! So for example, if you have an image of a puppy (these seem popular at the moment ) playing with a ball, you could use something like “My puppy Betsy playing with a bowling ball” as the alt-attribute for the image. If you also have a link around the image, pointing a large version of the same photo, you could use “View this image in high-resolution” as the title attribute for the link.”

Link Title Attributes, Acronym & ABBR Tags

Does Google Count Text in The Acronym Tag?

From my tests, no. From observing how my test page ranks – Google is ignoring keywords in the acronym tag.

My observations from a test page include;

  • Link Title Attribute – no benefit passed via the link either to another page, it seems
  • ABBR (Abbreviation Tags) – No
  • Image File Name – No
  • Wrapping words (or at least numbers) in SCRIPT – Sometimes. Google is better at understanding what it can render in 2018.

It’s clear many invisible elements of a page are completely ignored by Google (elements that would interest us as SEOs).

Some invisible items are (still) apparently supported:

  • NOFRAMES – Yes
  • NOSCRIPT – Yes
  • ALT Attribute – Yes

Unless you really have cause to focus on any particular invisible element, I think the **P** tag is the most important tag to optimise in 2018.
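For reference, the kinds of ‘invisible’ elements tested above look like this in HTML (the text and URL are hypothetical):

<abbr title="Search Engine Optimisation">SEO</abbr>
<acronym title="Hypertext Markup Language">HTML</acronym>
<a href="https://www.example.com/" title="A link title attribute">example anchor text</a>
<!-- In my tests, keyword text placed only in these title attributes and tags did not appear to help rankings -->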

Keywords In URLs Make Search Engine Friendly URLs

Clean URLs (or search engine friendly URLs) are just that – clean, easy to read, simple.

You do not need clean URLs in site architecture for Google to spider a site successfully (confirmed by Google in 2008), although I do use clean URLs as a default these days, and have done so for years.

It’s often more usable.

Is there a massive difference in Google when you use clean URLs?

No, in my experience it’s very much a second or third order effect, perhaps even less, if used on its own. However – there is a demonstrable benefit to having keywords in URLs.

The thinking is that you might get a boost in Google SERPs if your URLs are clean – because you are using keywords in the actual page name instead of a parameter or session ID number (which Google often struggles with).

I think Google might reward the page some sort of relevance because of the actual file / page name. I optimise as if it does, and when asked about keywords in URLs Google did reply:

QUOTE: “I believe that is a very small ranking factor.” John Mueller, Google 2016

It is virtually impossible to isolate any ranking factor with a degree of certainty.

Where any benefit is slightly detectable is when people (say in forums) link to your site with the URL as the link.

Then it is fair to say you do get a boost because keywords are in the actual anchor text link to your site, and I believe this is the case, but again, that depends on the quality of the page linking to your site. That is, if Google trusts it and it passes Pagerank (!) and anchor text benefit.
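For example, when someone pastes your address into a forum post, a clean URL effectively becomes keyword-rich anchor text (example.com and the path are hypothetical):

<a href="https://www.example.com/blue-widgets/">https://www.example.com/blue-widgets/</a>
<!-- The keywords in the clean URL double as the visible anchor text of the link -->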

And of course, you’ll need citable content on that site of yours.

Sometimes I will remove the stop-words from a URL and leave just the important keywords in the page filename, because a lot of forums garble a long URL to shorten it. Most forums will be nofollowed in 2018, to be fair, but some old habits die hard.

Sometimes I prefer to see the exact phrase I am targeting as the name of the URL I am asking Google to rank.

I configure URLs the following way;

  1. — is automatically changed by the CMS using URL rewrite to
  2. — which I then break down to something like

It should be remembered that although Googlebot can crawl sites with dynamic URLs, it is assumed by many webmasters that there is a greater risk it will give up if the URLs are deemed unimportant and contain multiple variables and session IDs (a theory).

As standard, I use clean URLs where possible on new sites these days, and try to keep the URLs as simple as possible and do not obsess about it.

That’s my aim at all times when I optimise a website to work better in Google – simplicity.

Google does look at keywords in the URL, even at a granular level.

Having a keyword in your URL might be the difference between your site ranking and not – potentially useful for taking advantage of long tail search queries.

More Reading:

Absolute Or Relative URLs

My advice would be to keep it consistent whatever you decide to use.

I prefer absolute URLs. That’s just a preference. Google will crawl either if the local setup is correctly developed.

  • What is an absolute URL? Example –
  • What is a relative URL? Example – /search-engine-optimisation.htm

Relative just means relative to the document the link is on.

Move that page to another site and it won’t work.

With an absolute URL, it would work.
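A quick hypothetical illustration of the two forms (example.com stands in for your own domain):

<a href="https://www.example.com/search-engine-optimisation.htm">An absolute URL - the full address, including the domain</a>
<a href="/search-engine-optimisation.htm">A relative URL - resolved against the page the link sits on</a>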

This is entirely going to be a choice for your developers. Some developers on very large sites will always prefer relative URLs.

How long is too long for a URL?

QUOTE: “As far as I know, there’s no theoretical length limit, but we recommend keeping URLs shorter than 2000 characters to keep things manageable.” John Mueller, Google 2014

Subdirectories or Files For URL Structure

Sometimes I use subfolders and sometimes I use files. I have not been able to decide if there is any real benefit (in terms of ranking boost) to using either. A lot of CMS these days use subfolders in their file path, so I am pretty confident Google can deal with either.

I used to prefer files like .html when I was building a new site from scratch, as they were the ’end of the line’ for search engines, as I imagined it, and a subfolder (or directory) was a collection of pages.

I used to think it could take more to get a subfolder trusted than, say, an individual file, and I guess this swayed me to use files on most websites I created (back in the day). Once subfolders are trusted, it’s six of one and half a dozen of the other what the actual difference is in terms of ranking in Google – usually, rankings in Google are more determined by how RELEVANT or REPUTABLE a page is for a query.

In the past, subfolders could be treated differently than files (in my experience).

Subfolders can be trusted less than other subfolders or pages in your site, or ignored entirely. Subfolders *used to seem to me* to take a little longer to get indexed by Google, than for instance .html pages.

People talk about trusted domains, but they don’t mention (or don’t think) that some parts of the domain can be trusted less. Google treats some subfolders… differently. Well, they used to – and remembering how Google used to handle things has some benefits – even in 2018.

Some say don’t go beyond four levels of folders in your file path. I haven’t experienced too many issues, but you never know.

When asked in a 2015 hangout, Google answered this:

QUOTE: “Is the number of directories within a URL a ranking factor? No.” John Mueller, Google 2015

I think in 2018 it’s even less of something to worry about. There are so many more important elements to check.

How Google Treats Subdomains: “We… treat that more as a single website”

John Mueller was asked if it is ok to interlink sub-domains, and the answer is yes, usually, because Google will treat subdomains on a website (primarily I think we can presume) as “a single website“:

QUOTE: “So if you have different parts of your website and they’re on different subdomains that’s that’s perfectly fine that’s totally up to you and the way people link across these different subdomains is really up to you I guess one of the tricky aspects there is that we try to figure out what belongs to a website and to treat that more as a single website and sometimes things on separate subdomains are like a single website and sometimes they’re more like separate websites for example on on blogger all of the subdomains are essentially completely separate websites they’re not related to each other on the other hand other websites might have different subdomains and they just use them for different parts of the same thing so maybe for different country versions maybe for different language versions all of that is completely normal.” John Mueller 2017

That is important if Google has a website quality rating score (or multiple scores) for websites.

Personally, as an SEO, I prefer subdirectories rather than subdomains if given the choice, unless it really makes sense to house particular content on a subdomain, rather than the main site (as in the examples John mentions).

Back in the day when Google Panda was released, many tried to run from Google’s content quality algorithms by moving ‘penalised’ pages and content to subdomains. I thought that was a temporary solution.

Over the long term, you can, I think, expect Google to treat subdomains on most common use websites as one entity – if it is – and be ranked accordingly in terms of content quality and user satisfaction.

Should I Choose a Subfolder or Subdomain?

If you have the choice, I would choose to house content on a subfolder on the main domain. Recent research would still indicate this is the best way to go:

QUOTE: “When you move from Subdomain to Subdirectory, you rank much better, and you get more organic traffic from Google.” Sistrix, 2018

Subfolders or Subdomains for google seo in 2018

Which Is Better For Google? PHP, HTML or ASP?

Google doesn’t care. As long as it renders as a browser compatible document, it appears Google can read it these days.

I prefer PHP these days even with flat documents as it is easier to add server side code to that document if I want to add some sort of function to the site.

Fetch As Google

Check how your site renders to Google using the Fetch & Render Tool that Google provides as part of Google Search Console.

It is important that what Google (Googlebot) sees is (exactly) what a visitor would see if they visit your site. Blocking Google can sometimes result in a real ranking problem for websites.

Google renders your web pages as part of their web ranking process, so ENSURE you do not block any important elements of your website that impact how a user or Googlebot would see your pages.

If Google has problems accessing particular parts of your website, it will tell you in Search Console.

Tip: The fetch and render tool is also a great way to submit your website and new content to Google for indexing so that the page may appear in Google search results.

Check How Google Views Your Desktop Site

Google search console will give you information on this:

Screenshot: Google Fetch and Render results (desktop).


Check How Google Views Your Smartphone Site

Google search console will give you information on this:

Screenshot: Google Fetch and Render results (mobile).


Check How Your Site Renders On Other Browsers

You will also want to check how the website looks in different versions of Internet Explorer, Chrome, Firefox, Android & Safari Browsers, too – on mobile and desktop.

Site owners and beginners to web design should know from the start that your site will not and cannot look the same in all browsers and operating systems and some visitors on certain devices will run into some sort of difficulty on your site.

Your website CAN’T look the same in ALL of these browsers, but if it looks poor in most of the popular browsers, then you might have a problem.

In fact – Google specifically states in their Webmaster Technical Guidelines that you should:

QUOTE: “Test your site to make sure that it appears correctly in different browsers.” Google

When it comes to browser compatibility, Google has 4 main tips:

  • Test your site in as many browsers as possible.

  • Write good, clean HTML.

  • Specify your character encoding.

  • Consider accessibility.

If you are a website designer, you might want to test your web design and see how it looks in different versions of Microsoft Windows Internet Explorer.

You’ll especially want to test how your new web page looks in IE, Firefox, Chrome and Safari across multiple operating systems including Android, Windows for PC and Apple Mac OS, desktop and mobile versions.

Is Valid HTML & CSS A Ranking Factor?

Above – a Google video confirming this advice I first shared in 2008.

QUOTE: “it’s not so much that the code have to be absolutely perfect but whether or not the page is going to render well for the user in general” Danny Sullivan, Google

Does Google rank a page higher because of valid code? The short answer is no, even though I tested it on a small-scale test with different results.

Google doesn’t care if your page is valid HTML and valid CSS. This is clear – check any top ten results in Google and you will probably see that most contain invalid HTML or CSS. I love creating accessible websites but they are a bit of a pain to manage when you have multiple authors or developers on a site.

If your site is so badly designed with a lot of invalid code even Google and browsers cannot read it, then you have a problem.

Where possible, if commissioning a new website, demand, at least, minimum web accessibility compliance on a site (there are three levels of priority to meet), and aim for valid HTML and CSS. Actually, this is the law in some countries although you would not know it, and be prepared to put a bit of work in to keep your rating.

Valid HTML and CSS are a pillar of best practice website optimisation, not strictly a part of professional search engine optimisation. It is one form of optimisation Google will not penalise you for.

I usually still aim to follow W3C recommendations that help deliver a better user experience;

Point Internal Links To Relevant Pages

Link To Important Pages

QUOTE: “We do use internal links to better understand the context of content of your sites” John Mueller, Google 2015

I link to relevant internal pages in my site when necessary.

I silo any relevance or trust mainly via links in text content and secondary menu systems and between pages that are relevant in context to one another.

I don’t worry about perfect silo’ing techniques anymore, and don’t worry about whether or not I should link to one category from another as I think the ‘boost’ many proclaim is minimal on the size of sites I usually manage.

I do not obsess about site architecture as much as I used to… but I always ensure the pages I want indexed are all reachable from a crawl from the home page – and I still emphasise important pages by linking to them where relevant. I always aim to get THE most important exact match anchor text pointing to the page from internal links – but I avoid abusing internal links and avoid overtly manipulative internal links that are not grammatically correct, for instance.

There’s no set method I find works for every site, other than to link to related internal pages often without overdoing it and where appropriate.
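As a simple hypothetical sketch of what I mean – a contextual internal link with descriptive, relevant anchor text rather than ‘click here’ (the page and path are invented):

<p>If you are redesigning your site, read our guide to <a href="/website-migration-advice/">website migration SEO advice</a> before you change any URLs.</p>
<!-- Hypothetical internal link: the anchor text describes the destination page in plain, grammatical English -->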

What Are SERP Sitelinks?

When Google knows enough about the history or relationships of a website (or web page), it will sometimes display what are called site links (or mega site links) under the URL of the website in question.

This results in an enhanced search snippet in SERPs.

This is normally triggered when Google is confident this is the site you are looking for, based on the search terms you used.

Sitelinks are usually reserved for navigational queries with a heavy brand bias, a brand name or a company name, for instance, or the website address.

I’ve tracked the evolution of Google site links in organic listings over the years, and they are seemingly picked based on a number of factors.

How To Get Google Sitelinks?

Pages that feature in site links are often popular pages on your site, in terms of internal or external links, or user experience or even recent posts that may have been published on your blog.

Google seems to like to mix this up a lot, perhaps to offer some variety, and probably to obfuscate results to minimise or discourage manipulation.

Sometimes it returns pages that leave me scratching my head as to why Google selected them.

If you don’t HAVE site links, have a bit of patience and focus on other areas of your web marketing, like adding more content, get some PR or social activity focussed on the site.

Google WILL give you site links on some terms; ONCE Google is confident your site is the destination users want.

That could be a week or months, but the more popular the site is, the more likely Google will catch up fast.

Sitelinks are not something that can be switched on or off, although you can control to some degree which pages are selected as site links. You can do that in Google Webmaster Tools AKA Search Console.

Link Out To Related Sites

Concerning on-page SEO best practices, I usually link out to other quality relevant pages on other websites where possible and where a human would find it valuable.

I don’t link out to other sites from the homepage. I want the Pagerank of the home page to be shared only with my internal pages. I don’t link out to other sites from my category pages either, for the same reason.

I link to other relevant sites (a deep link where possible) from individual pages and I do it often, usually. I don’t worry about link equity or PR leak because I control it on a page-to-page level.

This works for me, it allows me to share the link equity I have with other sites while ensuring it is not at the expense of pages on my domain. It may even help get me into a ‘neighbourhood’ of relevant sites, especially when some of those start linking back to my site.

Linking out to other sites, especially using a blog, also helps tell others that might be interested in your content that your page is ‘here’. Try it.

I don’t abuse anchor text, but I will be considerate, and usually try and link out to a site using keywords these bloggers / site owners would appreciate.

The recently leaked Quality Raters Guidelines document clearly tells web reviewers to identify how USEFUL or helpful your SUPPLEMENTARY NAVIGATION options are – whether you link to other internal pages or pages on other sites.

Broken Links Are A Waste Of Link Power

The simplest piece of advice I ever read about creating a website / optimising a website was years ago and it is still useful today:

QUOTE: “Make sure all your pages link to at least one other in your site”

This advice is still sound today and the most important piece of advice out there in my opinion.

Check your pages for broken links. Seriously, broken links are a waste of link power and could hurt your site, drastically in some cases.

Google is a link-based search engine – if your links are broken and your site is chock full of 404s you might not be at the races.

Here’s the second best piece of advice, in my opinion, seeing as we are just about talking about website architecture;

QUOTE: “Link to your important pages often internally, with varying anchor text in the navigation and in page text content.”

Especially if you do not have a lot of Pagerank.

Does Only The First Link Count In Google?

Does the second anchor text link on a page count?

One of the more interesting discussions in the webmaster community of late has been trying to determine which links Google counts as links on pages on your site. Some say the link Google finds higher in the code, is the link Google will ‘count’ if there are two links on a page going to the same page.

I tested this (a while ago now) with the post Google counts The First Internal Link.

For example (and I am talking internally here): if you took a page and I placed two links on it, both going to the same page? (OK – hardly scientific, but you should get the idea.)

Will Google only ‘count’ the first link? Or will it read the anchor text of both links, and give my page the benefit of the text in both links especially if the anchor text is different in both links? Will Google ignore the second link?
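The scenario being tested looks something like this – a hypothetical page with two links to the same URL, each with different anchor text:

<p>Our range of <a href="/blue-widgets/">blue widgets</a> ships worldwide.</p>
<p>You can also <a href="/blue-widgets/">buy widgets online</a> with free delivery.</p>
<!-- Hypothetical: if only the first link counts, the second anchor text may pass little or no relevance to the target page -->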

What is interesting to me is that knowing this leaves you with a question. If your navigation array has your main pages linked to in it, perhaps your links in content are being ignored, or at least, not valued.

I think links in body text are invaluable. Does that mean placing the navigation below the copy to get a wide and varied internal anchor text to a page?


As I said, I think this is one of the more interesting talks in the community at the moment, and perhaps Google works differently with internal links as opposed to external links to other websites.

I think quite possibly this could change day to day if Google pressed a button, but I optimise a site thinking that only the first link on a page will count – based on what I monitor although I am testing this – and actually, I usually only link once from page-to-page on client sites, unless it’s useful for visitors.

Duplicate Content

QUOTE: “there is no duplicate content penalty” Andrey Lipattsev, Google 2016

In the video above (June 17 2016), Google’s Andrey Lipattsev was adamant: Google DOES NOT have a duplicate content penalty.

It’s a good video for a refresher on the subject of duplicated content on your site.

He clearly wants people to understand it is NOT a penalty if Google discovers your content is not unique and doesn’t rank your page above a competitor’s page.

Also, as John Mueller points out, Google picks the best option to show users depending on who they are and where they are.  So sometimes, your duplicate content will appear to users where relevant.

This latest advice from Google is useful in that it clarifies Google’s position, which I quickly paraphrase below:

  • There is no duplicate content penalty
  • Google rewards UNIQUENESS and the signals associated with ADDED VALUE
  • Google FILTERS duplicate content
  • Duplicate content can slow Google down in finding new content
  • XML sitemaps are just about the BEST technical method of helping Google discover your new content
  • Duplicate content is probably not going to set your marketing on fire
  • Google wants you to concentrate signals in canonical documents, and it wants you to focus on making these canonical pages BETTER for USERS.
  • For SEO, it is not necessarily the abundance of duplicate content on a website that is the real issue. It’s the lack of positive signals that NO unique content or added value provides that will fail to help you rank faster and better in Google.

A sensible strategy for SEO would still appear to be to reduce Googlebot crawl expectations and consolidate ranking equity & potential in high-quality canonical pages and you do that by minimising duplicate or near-duplicate content.
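One common way of consolidating those signals is the rel=canonical link element, placed in the HEAD of duplicate or near-duplicate variants and pointing at the preferred version (the URL below is hypothetical, and Google treats the element as a strong hint rather than a directive):

<link rel="canonical" href="https://www.example.com/blue-widgets/" />
<!-- Tells Google which version of a duplicated page you would prefer to be indexed and ranked -->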

A self-defeating strategy would be to ‘optimise’ low-quality or non-unique pages or present low-quality pages to users.

Google clearly says that the practice of making your text more ‘unique’, using low-quality techniques like adding synonyms and related words is:

QUOTE: “probably more counter productive than actually helping your website” John Mueller, Google

Duplicate Content SEO Best Practice

Webmasters are confused about ‘penalties’ for duplicate content, which is a natural part of the web landscape, because Google claims there is NO duplicate content penalty, yet rankings can be impacted negatively, apparently, by what looks like ‘duplicate content’ problems.

The reality in 2018 is that if Google classifies your duplicate content as THIN content, or MANIPULATIVE BOILER-PLATE or NEAR DUPLICATE ‘SPUN’ content, then you probably DO have a severe problem that violates Google’s website performance recommendations and this ‘violation’ will need ‘cleaned’ up – if – of course – you intend to rank high in Google.

Google wants us to understand, in 2018, that MANIPULATIVE BOILER-PLATE or NEAR DUPLICATE ‘SPUN’ content is NOT ‘duplicate content’.

Duplicate content is not necessarily ‘spammy’ to Google.

The rest of it is, e.g.:

QUOTE: “Content which is copied, but changed slightly from the original. This type of copying makes it difficult to find the exact matching original source. Sometimes just a few words are changed, or whole sentences are changed, or a “find and replace” modification is made, where one word is replaced with another throughout the text. These types of changes are deliberately done to make it difficult to find the original source of the content. We call this kind of content “copied with minimal alteration.” Google Search Quality Evaluator Guidelines March 2017

John Mueller of Google also clarified, with examples, that there is:

QUOTE: “No duplicate content penalty” but “We do have some things around duplicate content … that are penalty worthy” John Mueller, Google

More Reading:

Double or Indented Listings in Google

How do you get Double or Indented Listings in Google SERPs? How do you get two listings from the same website in the top ten results in Google instead of one (in normal view with 10 results).

Generally speaking, this means you have at least two pages with enough link equity to reach the top ten results – two pages very relevant to the search term. In 2018 however it could be a sign of Google testing different sets of results by for instance merging two indexes where a website ranks differently in both.

You can achieve this with relevant pages, good internal structure and of course links from other websites. It’s far easier to achieve in less competitive verticals, but in the end it does come down in many cases to domain authority and high relevance for a particular keyphrase.

Some SERPs feature sites with more than two results from the same site. Often this is the result of what Google might call lower quality SERPs.

Create Useful 404 Pages

I mentioned previously that Google gives clear advice on creating useful 404 pages:

  1. Tell visitors clearly that the page they’re looking for can’t be found
  2. Use language that is friendly and inviting
  3. Make sure your 404 page uses the same look and feel (including navigation) as the rest of your site.
  4. Consider adding links to your most popular articles or posts, as well as a link to your site’s home page.
  5. Think about providing a way for users to report a broken link.
  6. Make sure that your webserver returns an actual 404 HTTP status code when a missing page is requested

It is incredibly important in 2018 to create useful and proper 404 pages. This will help prevent Google recording lots of autogenerated thin pages on your site (both a security risk and a rankings risk).
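Below is a bare-bones sketch of a 404 page that follows those guidelines (the page names and links are hypothetical – and remember the crucial part, the 404 HTTP status code itself, is returned by the server, not by the HTML):

<h1>Sorry – we can’t find that page</h1>
<p>The page you are looking for may have been moved or removed.</p>
<ul>
  <li><a href="/">Return to the home page</a></li>
  <li><a href="/seo-tutorial/">Read our most popular guide</a></li>
  <li><a href="/contact/">Report a broken link</a></li>
</ul>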

I sometimes use 410 responses for expired content that is never coming back.

A poor 404 page and user interaction with it, can only lead to a ‘poor user experience’ signal at Google’s end, for a number of reasons. I will highlight a poor 404 page in my audits and actually programmatically look for signs of this issue when I scan a site. I don’t know if Google looks at your site that way to rate it e.g. algorithmically determines if you have a good 404 page – or if it is a UX factor, something to be taken into consideration further down the line – or purely to get you thinking about 404 pages (in general) to help prevent Google wasting resources indexing crud pages and presenting poor results to searchers. I think rather that any rating would be a second order scoring including data from user activity on the SERPs – stuff we as SEO can’t see.

At any rate – I don’t need to know why we need to do something, exactly, if it is in black and white like:

QUOTE: “Create useful 404 pages” Google, 2018


QUOTE:  “Tell visitors clearly that the page they’re looking for can’t be found. Use language that is friendly and inviting. Make sure your 404 page uses the same look and feel (including navigation) as the rest of your site. Consider adding links to your most popular articles or posts, as well as a link to your site’s home page. Think about providing a way for users to report a broken link. No matter how beautiful and useful your custom 404 page, you probably don’t want it to appear in Google search results. In order to prevent 404 pages from being indexed by Google and other search engines, make sure that your webserver returns an actual 404 HTTP status code when a missing page is requested.” Google, 2018

….. all that needs doing is to follow the guideline exactly as Google tells you to do it.

Ratings for Pages with Error Messages or No MC

Google doesn’t want to index pages without a specific purpose or sufficient main content. A good 404 page and proper setup prevents a lot of this from happening in the first place.

QUOTE: “Some pages load with content created by the webmaster, but have an error message or are missing MC. Pages may lack MC for various reasons. Sometimes, the page is “broken” and the content does not load properly or at all. Sometimes, the content is no longer available and the page displays an error message with this information. Many websites have a few “broken” or non-functioning pages. This is normal, and those individual non-functioning or broken pages on an otherwise maintained site should be rated Low quality. This is true even if other pages on the website are overall High or Highest quality.” Google

Does Google programmatically look at 404 pages?

We are told NO in a recent hangout – but the Quality Raters Guidelines suggest that “Users probably care a lot”.

Do 404 Errors in Search Console Hurt My Rankings?

QUOTE: “404 errors on invalid URLs do not harm your site’s indexing or ranking in any way.” John Mueller, Google

It appears this isn’t a one-size-fits-all answer. If you properly deal with mishandled 404 errors that have some link equity, you reconnect equity that was once lost – and this ‘backlink reclamation’ evidently has value.

The issue here is that Google introduces a lot of noise into that Crawl Errors report, making it unwieldy and not very user-friendly.

A lot of the broken links Google tells you about are often totally irrelevant legacy issues. Google could make the report instantly more valuable by telling us which 404s are linked to only from external websites.

Fortunately, you can find your own broken links on site using the myriad of SEO tools available.

I also prefer to use Analytics to look for broken backlinks on a site with some history of migrations, for instance.

John has clarified some of this before, although he is talking specifically (I think) about errors found by Google in Search Console (formerly Google Webmaster Tools):

  1. In some cases, crawl errors may come from a legitimate structural issue within your website or CMS. How do you tell? Double-check the origin of the crawl error. If there’s a broken link on your site, in your page’s static HTML, then that’s always worth fixing
  2. What about the funky URLs that are “clearly broken?” When our algorithms like your site, they may try to find more great content on it, for example by trying to discover new URLs in JavaScript. If we try those “URLs” and find a 404, that’s great and expected. We just don’t want to miss anything important

If you are making websites and want them to rank, the 2014 and 2015 Quality Raters Guidelines documents are a great guide for Webmasters to avoid low-quality ratings and potentially avoid punishment algorithms.

 301 Redirects Are POWERFUL & WHITE HAT

You can use 301 redirects to redirect pages, sub-folders or even entire websites and preserve Google rankings that the old page, sub-folder or websites enjoyed.

QUOTE:  “If you need to change the URL of a page as it is shown in search engine results, we recommend that you use a server-side 301 redirect. This is the best way to ensure that users and search engines are directed to the correct page.” Google, 2018


QUOTE: “301s happen at a page level so just because you see one 301 on one page of the old domain does not mean the entire domain has completely migrated.” Matt Cutts, Google


QUOTE: “We follow up to five redirect steps in a redirect chain if it’s a server-side redirect.” John Mueller, Google


QUOTE: “How do I move from one domain to another domain and try to preserve the rankings as best as possible?…do a 301 permanent redirect to the new location (assuming that you’re you’re moving for all time and eternity so this is the good case for a permanent or 301 redirect if you were planning to undo this later or it’s temporary then you’d use a 302 redirect)…. search engines should be able to follow the trail of all the 301 redirects” Matt Cutts, Google

Rather than tell Google via a 404, 410 or some other instruction that this page isn’t here anymore, you can permanently redirect an expired page to a relatively similar page to preserve any link equity that old page might have.

Redirecting multiple old pages to one new page works too if the information is there on the new page that ranked the old page. Pages should be thematically connected if you want the redirects to have a SEO benefit.

My general rule of thumb is to make sure the information (and keywords) on the old page are featured prominently in the text of the new page – stay on the safe side.

Most already know the power of a 301 redirect and how you can use it to power even totally unrelated pages to the top of Google for a time – sometimes a very long time.

You need to keep these redirects in place (for instance on a linux apache server, in your htaccess file) forever.

QUOTE: “So from our point of view, when we look at 301 redirects and permanent site move situations, we do expect that 301 redirect to be there for the long run.”
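As a minimal sketch (the URLs are placeholders), that long-term instruction is just a single line per old URL in your .htaccess file:

# Permanent (301) redirect from an old URL to its equivalent replacement
Redirect 301 /old-page/ https://www.example.com/new-page/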

I WOULDN’T 301 REDIRECT users blindly to your home page if you expect any SEO benefits from 301s.

QUOTE: “So the 301 redirect from all pages to the home page, that would be something that we see as a soft 404s.” John Mueller, Google

I’d also be careful of redirecting lots of low-quality links to one URL. If you need a page to redirect old URLs to, consider your sitemap or contact page. Audit any page’s backlinks BEFORE you redirect them to an important page.

I’m seeing CANONICAL LINK ELEMENTS work just the same as 301s in 2018 – though they seem to take a little longer to have an impact, and Google can eventually ignore your canonicals if they are misused.

Hint – a good tactic at the moment is to CONSOLIDATE old, thin under-performing articles Google ignores, into bigger, better quality articles.

I usually then 301 all old pages to a single thematically related page to consolidate link equity and content equity. As long as the intention is to serve users and create content that is satisfying and more up-to-date – Google is OK with this.
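A sketch of that consolidation in .htaccess, with placeholder URLs – several thin, related posts all permanently redirected to the one bigger, updated article:

# Consolidate several thin, related posts into one in-depth article
Redirect 301 /thin-post-1/ https://www.example.com/big-guide/
Redirect 301 /thin-post-2/ https://www.example.com/big-guide/
Redirect 301 /thin-post-3/ https://www.example.com/big-guide/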

QUOTE: “Anytime you do a bigger change on your website if you redirect a lot of URLs or if you go from one domain to another or if you change your site’s structure then all of that does take time for things to settle down so we can follow that pretty quickly we can definitely forward the signals there but that doesn’t mean that’ll happen from one day to next” John Mueller, Google 2016

Changing from HTTP to HTTPS? –  John Mueller said in a 2015 hangout to “make sure that you have both variations listed in Webmaster Tools” and essentially just set up the (301) redirect, set up the rel=canonical.
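A common way to force that site-wide HTTP to HTTPS (301) redirect on an Apache server is a rewrite rule like the sketch below (assuming mod_rewrite is enabled; example.com is a placeholder for your own domain):

RewriteEngine on
# Only rewrite requests that arrive over plain HTTP
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]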

NOTE: Some Redirects May Be Treated As Soft 404 Errors

Can it be surmised that Google might label certain 301 redirects (that DO NOT redirect to VERY EQUIVALENT CONTENT) as SOFT 404 and so devalue all signals associated with them?

Will this be the same for e-commerce sites?

What Are Soft 404 Pages?


QUOTE: “soft 404 means that a URL on your site returns a page telling the user that the page does not exist and also a 200-level (success) code to the browser. (In some cases, instead of a “not found” page, it might be a page with little or no usable content–for example, a sparsely populated or empty page.)” Google, 2018

From my experience, soft 404s are not usually created in enough volume to be a problem on most sites, but auto-generated pages on some sites can produce soft 404s at a content level, and in numbers large enough to cause indexation challenges.

QUOTE: “Returning a code other than 404 or 410 for a non-existent page (or redirecting users to another page, such as the homepage, instead of returning a 404) can be problematic. Firstly, it tells search engines that there’s a real page at that URL. As a result, that URL may be crawled and its content indexed. Because of the time Googlebot spends on non-existent pages, your unique URLs may not be discovered as quickly or visited as frequently and your site’s crawl coverage may be impacted (also, you probably don’t want your site to rank well for the search query” GOOGLE

However, Google will also treat certain mismatched or incorrect redirects as soft-404 type pages, too.

And this is a REAL problem in 2018, and a marked change from the way Google worked say ten years ago.

It essentially means that Google is not going to honour your redirect instruction and that means you are at risk of knobbling any positive signals you are attempting to transfer through a redirect.

You MUST Redirect To Equivalent Content in 2018

Sometimes it is useful to direct visitors from a usability point of view, but sometimes that usability issue will impact SEO benefits from old assets.

The message is: out-of-date content can be a ‘bad user experience’ (it will depend on the query, of course) and you should ONLY REDIRECT content to EQUIVALENT CONTENT – ESPECIALLY if you have backlinks pointing to these pages and you want them to still count in 2018.

How To Use 301 Redirects Properly To Preserve Rankings in Google in 2018

I’ve stuck with the same method over the years when it comes to redirects.

If a PARTICULAR CANONICAL HEAD KEYWORD is IMPORTANT (even perhaps a SYNONYM or LONG TAIL VARIANT) and I think a particular 301 REDIRECT has some positive impact on how Google judges the quality or relevance of the page, I will make sure the CANONICAL HEAD KEYWORD and SYNONYMS are on the FINAL PAGE I redirect Google to (which is the one that will be rated and cached).


If I want to boost that page’s relevance for that KEYWORD at the centre of any redirects, I will ensure the new page content is updated and expanded upon if it is of genuine interest to a user.

Also, note, Google has recently said certain redirects to your home page will also count as soft 404.

Here are a few tips when managing internal redirects on your site:

  • ensure redirects you employ designed to permanently move content are 301 redirects
  • minimise redirect chains (Google can only be expected to follow about 5 redirects in a chain, and fewer redirects will speed up your site)
  • avoid redirecting any links to category pages unless extremely relevant, as per advice from Google
  • avoid redirecting links to NON-equivalent content
  • re-point existing redirected links to better, more equivalent content (as you are recommended to do this in the Panda document)
  • ensure redirected domains redirect through a canonical redirect, with any chains minimised, although BE SURE to audit the backlink profile of any redirects you point at a page – with reward comes punishment if those backlinks are toxic (another example of Google opening up the technical SEO war on a front that is the converse of building backlinks to your site)
  • redirect really out of date links (you absolutely want to keep) to pages of secondary importance which are as relevant as possible to your existing content.
  • to avoid throwing link equity away, you might create HIGH LEVEL IN-DEPTH TOPIC PAGES on your site and redirect (or use canonical redirects) any related expired content that HAVE INCOMING BACKLINKS, to this topic page (and keep it updated, folding content from old pages, where relevant and there is traffic opportunity, to create TOPIC pages that are focused on the customer e.g. information pages)
  • avoid unnecessary redirects in links in internal pages on the site
  • don’t expect Toolbar Pagerank to pass via https redirects, for instance, as Toolbar Pagerank is dead
  • a temporary or permanent redirect is only ‘in place’ as long as the instruction to do so REMAINS in place in, for instance, your htaccess file. Remove your redirect code, and a permanent redirect just became a temporary redirect and you can lose the benefit of the redirect
  • a 302 redirect left in place usually functions as a 301, although I still would not rely on a 302 to function as a 301, based on previous experience (so I make all my redirects 301)
  • In my experience, a 301 redirect is what most want to use, if a redirect is necessary
  • if you have no real reason to redirect a page, and no equivalent content to redirect it to, then serve a 404 – or even better, a 410 server response (GONE FOREVER). You should only be implementing 301s if you have some signal of quality pointing to that particular URL that is worth consolidating.

Perhaps Google is giving those who will chase it another element to optimise: a way to keep the traffic you already get and succeed above others who will not take the measures, in this case, to optimise their redirects.

This may end up being another example where, to optimise, you need to dig down to the level of becoming an htaccess/Apache optimiser (which I am not, just as I am not a professional copywriter or CSS or mobile wizard).

Fortunately, you can still get away with keeping (whatever it is) as simple, fast and as canonical as possible and that is what I try and do.

Redirect Non-WWW To WWW (or Vice Versa)

QUOTE: “The preferred domain is the one that you would like used to index your site’s pages (sometimes this is referred to as the canonical domain). Links may point to your site using both the www and non-www versions of the URL (for instance, http://www.example.com and http://example.com). The preferred domain is the version that you want used for your site in the search results.” Google, 2018

Your site probably has canonicalisation issues (especially if you have an e-commerce website); these can start at the domain level and exacerbate duplicate content problems on your website.

Simply put, a URL like https://www.example.com/ can be treated by Google as a different URL from https://example.com/ even though it’s the same page, and it can get even more complicated.

It’s thought REAL Pagerank can be diluted if Google gets confused about your URLs and, speaking simply, you don’t want this PR diluted (in theory).

That’s why many, including myself, redirect non-www to www (or vice versa) if the site is on a Linux/Apache server – in the htaccess file (example.com below is a placeholder for your own domain):

Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]

Basically, you are redirecting all the Google juice to one canonical version of a URL.

In 2018 – this is a MUST HAVE best practice.

It keeps it simple when optimising for Google. It should be noted: it’s incredibly important not to mix the two types of www/non-www on site when linking your internal pages!

Note in 2018 Google asks you which domain you prefer to set as your canonical domain in Google Webmaster Tools.

QUOTE: “Note: Once you’ve set your preferred domain, you may want to use a 301 redirect to redirect traffic from your non-preferred domain, so that other search engines and visitors know which version you prefer.”


The Canonical Link Element Is VERY IMPORTANT

QUOTE: “I’ve got a slide here where I show I think 8 different URLs you know every single one of these URLs could return completely different content in practice we as humans whenever we look at ‘’ or just regular ‘’ or or we think of it as the same page and in practice it usually is the same page so technically it doesn’t have to be but almost always web servers will return the same content for like these 8 different versions of the URL so that can cause a lot of problems in search engines if rather than having your backlinks all go to one page instead it’s split between (the versions) and it’s a really big headache….how do people fix this well …. the canonical link element” Matt Cutts, Google

When it comes to Google SEO, the rel=canonical link element has become *VERY* IMPORTANT over the years and NEVER MORE SO.

QUOTE: “If your site contains multiple pages with largely identical content, there are a number of ways you can indicate your preferred URL to Google. (This is called “canonicalization”.)” Google

This element is employed by Google, Bing and other search engines to help them specify the page you want to rank out of duplicate and near duplicate pages found on your site, or on other pages on the web.

In the video above, Matt Cutts from Google shares tips on the new rel=”canonical” tag (more accurately – the canonical link element) that the 3 top search engines now support. If you want to know more, see how to use canonical tags properly.
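For example, a minimal sketch: if several URL variations can serve the same article, each variation declares the preferred URL in its head (example.com is a placeholder):

 <head>
  <link rel="canonical" href="https://www.example.com/seo-tutorial/">
 </head>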


Do I Need A Google XML Sitemap For My Website?

What is an XML sitemap and do I need one to SEO my site for Google?

QUOTE:  “(The XML Sitemap protocol) has wide adoption, including support from Google, Yahoo!, and Microsoft.”

No. You do NOT, technically, need an XML Sitemap to optimise a site for Google if you have a sensible navigation system that Google can crawl and index easily.

QUOTE: “Some pages we crawl every day. Other pages, every couple of months.” John Mueller, Google

Some pages are more important than others to Googlebot.

HOWEVER – in 2018 – you should have a Content Management System that produces one as a best practice – and you should submit that sitemap to Google in Google Webmaster Tools. Again – best practice.

Google has said very recently XML and RSS are still a very useful discovery method for them to pick out recently updated content on your site.

An XML Sitemap is a file on your server with which you can help Google easily crawl & index all the pages on your site. This is evidently useful for very large sites that publish lots of new content or update content regularly.

Your web pages will still get into search results without an XML sitemap if Google can find them by crawling your website, provided you:

  1. Make sure all your pages link to at least one other in your site
  2. Link to your important pages often, with varying anchor text, in the navigation and in page text content (if you want the best results)

Remember – Google needs links to find all the pages on your site, and links spread Pagerank, that help pages rank – so an XML sitemap is never a substitute for a great website architecture.

QUOTE: “Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site.”
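In its simplest form, then, a sitemap.xml file is nothing more exotic than this sketch (the URLs and dates are placeholders):

 <?xml version="1.0" encoding="UTF-8"?>
 <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
   <loc>https://www.example.com/</loc>
   <lastmod>2018-01-15</lastmod>
  </url>
  <url>
   <loc>https://www.example.com/seo-tutorial/</loc>
   <lastmod>2018-01-10</lastmod>
  </url>
 </urlset>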

Most modern CMS auto-generate XML sitemaps, Google does ask you to submit a sitemap in Webmaster Tools, and I do these days.

I prefer to define my important pages manually, by internal links and depth of content, but an XML sitemap is a best practice in 2018 for most sites.

QUOTE: “We support 50 megabytes for a sitemap file, but not everyone else supports 50 megabytes. Therefore, we currently just recommend sticking to the 10 megabyte limit,” John Mueller, Google

Google wants to know when primary page content is updated, not when supplementary page content is modified – if the content significantly changes, that’s relevant. If the primary content doesn’t change, then I wouldn’t update it.

Read my article on how to get your entire website crawled and indexed by Google.

Enough Satisfying Website Information for the Purpose of the Website

QUOTE: “If a page has one of the following characteristics, the Low rating is usually appropriate…..(if) There is an unsatisfying amount of website information for the purpose of the website.” Google, 2017

Google wants evaluators to find out who owns the website and who is responsible for the content on it.

If you are a business in the UK – your website also needs to meet the legal requirements necessary to comply with the UK Companies Act 2007. It’s easy to just incorporate this required information into your footer.

QUOTE: “Companies in the UK must include certain regulatory information on their websites and in their email footers …… or they will breach the Companies Act and risk a fine.” OUTLAW

Here’s what you need to know regarding website and email footers to comply with the UK Companies Act (with our information in bold);


  1. The Company Name –
    MBSA Marketing LTD (Trading as Hobo)
  2. Physical geographic address (A PO Box is unlikely to suffice as a geographic address; but a registered office address would – If the business is a company, the registered office address must be included.)
    MBSA Marketing LTD trading as Hobo,
    68 Finnart Street,
    Greenock,
    PA16 8HJ,
    Scotland, UK
  3. the company’s registration number should be given and, under the Companies Act, the place of registration should be stated (e.g.
    MBSA Marketing LTD is a company registered in Scotland with company number SC536213
  4. email address of the company (It is not sufficient to include a ‘contact us’ form without also providing an email address and geographic address somewhere easily accessible on the site)
  5. The name of the organisation with which the customer is contracting must be given. This might differ from the trading name. Any such difference should be explained
    The domain and the Hobo logo and creative is owned by Shaun Anderson and licensed to MBSA Marketing LTD of which Shaun is an employed co-founding director.
  6. If your business has a VAT number, it should be stated even if the website is not being used for e-commerce transactions.
    VAT No. 249 1439 90
  7. Prices on the website must be clear and unambiguous. Also, state whether prices are inclusive of tax and delivery costs.
    All Hobo Web Co Uk prices stated in email or on the website EXCLUDE VAT


The above information does not need to feature on every page, more on a clearly accessible page. However – with Google Quality Raters rating web pages on quality based on Expertise, Authority and Trust (see my recent making high-quality websites post) – ANY signal you can send to an algorithm or human reviewer’s eyes that you are a legitimate business is probably a sensible move at this time (if you have nothing to hide, of course).

Note: If the business is a member of a trade or professional association, membership details, including any registration number, should be provided. Consider also the Distance Selling Regulations which contain other information requirements for online businesses that sell to consumers (B2C, as opposed to B2B, sales).


Dynamic PHP Copyright Notice in WordPress

Now that your site complies with the Act – you’ll want to ensure your website never looks obviously out of date.

While you are editing your footer – ensure your copyright notice is dynamic and will change year to year – automatically.

It’s simple to display a dynamic date in your footer in WordPress, for instance, so you never need to change your copyright notice on your blog when the year changes.

This little bit of code will display the current year. Just add it in your theme’s footer.php and you can forget about making sure you don’t look stupid, or give the impression your site is out of date and unused, at the beginning of every year.

&copy; Copyright 2004 - <?php echo date("Y") ?>

A simple and elegant PHP copyright notice for WordPress blogs.

Rich Snippets

Rich Snippets and Schema Markup can be intimidating if you are new to them – but important data about your business can actually be very simply added to your site by sensible optimisation of your website footer.

This is easy to implement.

An optimised website footer can comply with law, may help search engines understand your site better and can help usability and improve conversions.

Properly optimised, your website footer can also help you make your search snippet stand out in Google results pages:


Adding Markup to Your Footer

You can take your information you have from above and transform it with markup to give even more accurate information to search engines.

From this:

 <p> © Copyright 2006-2016 MBSA Marketing LTD trading as Hobo, Company No. SC536213 | VAT No. 249 1439 90 <br>
 68 Finnart Street, Greenock, PA16 8HJ, Scotland, UK | TEL: 0800 689 0293<br>
 Business hours are 09.00 a.m. to 17.00 p.m. Monday to Friday - Local Time is <span id="time">9:44:36</span> (GMT)

…to this:

 <!-- schema.org item types below are restored as likely candidates – adjust to suit your own business -->
 <div itemscope itemtype="http://schema.org/LocalBusiness">
  © Copyright 2006-2016 <span itemprop="name">Hobo</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
   <span itemprop="streetAddress">68 Finnart Street</span>,
   <span itemprop="addressLocality">Greenock</span>,
   <span itemprop="addressRegion">Scotland</span>,
   <span itemprop="postalCode">PA16 8HJ</span>,
   <span itemprop="addressCountry">GB</span>
  </div> |
  TEL: <span itemprop="telephone">0800 689 0293</span> |
  EMAIL: <a href="" itemprop="email"></a>.
  <span itemprop="geo" itemscope itemtype="http://schema.org/GeoCoordinates">
   <meta itemprop="latitude" content="55.9516">
   <meta itemprop="longitude" content="4.7725">
  </span>
  <span>Company No. SC536213</span> |
  VAT No. <span itemprop="vatID">249 1439 90</span> |
  Business hours are <time itemprop="openingHours" datetime="Mo,Tu,We,Th,Fr 09:00-17:00">09.00 a.m. to 17.00 p.m. Monday to Friday</time>
  Local Time is <span id="time">9:46:20</span> (GMT)
 </div>

 <span class="rating-desc" itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Hobo Web SEO Services</span>
  <span itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">Rated <span itemprop="ratingValue">4.8</span> / 5 based on <span itemprop="reviewCount">11</span> reviews. | <a class="ratings" href="">Review Us</a></span>
 </span>

Tip: Note the code near the end of the above example, if you are wondering how to get yellow star ratings in Google results pages.

I got yellow stars in Google within a few days of adding the code to my website template – directly linking my site to information Google already has about my business.

Also – you can modify that link to point directly to your REVIEWS page on Google Plus to encourage people to review your business.

Now you can have a website footer that helps your business comply with UK Law, is more usable, automatically updates the copyright notice year – and helps your website stick out in Google SERPs.

PRO Tip – Now you know the basics, consider implementing rich schema using a much cleaner method called JSON-LD!
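As a rough sketch only – the values are lifted from the footer example above, and the LocalBusiness type is an assumption you should adjust to suit your own business – the same information expressed as JSON-LD sits in a single script element anywhere in the page:

 <script type="application/ld+json">
 {
   "@context": "https://schema.org",
   "@type": "LocalBusiness",
   "name": "Hobo",
   "telephone": "0800 689 0293",
   "vatID": "249 1439 90",
   "address": {
     "@type": "PostalAddress",
     "streetAddress": "68 Finnart Street",
     "addressLocality": "Greenock",
     "addressRegion": "Scotland",
     "postalCode": "PA16 8HJ",
     "addressCountry": "GB"
   },
   "geo": {
     "@type": "GeoCoordinates",
     "latitude": "55.9516",
     "longitude": "4.7725"
   },
   "openingHours": "Mo,Tu,We,Th,Fr 09:00-17:00"
 }
 </script>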


Keep It Simple, Stupid

Don’t Build Your Site With Flash, HTML Frames or any other deprecated technology.

Flash is a proprietary plug-in created by Macromedia to infuse (admittedly) fantastically rich media into your websites. The W3C advises you avoid the use of such proprietary technology to construct an entire site. Instead, build your site with CSS and HTML, ensuring everyone, including search engine robots, can sample your website content. Then, if required, you can embed media files such as Flash in the HTML of your website.

QUOTE: “Chrome will continue phasing out Flash over the next few years, first by asking for your permission to run Flash in more situations, and eventually disabling it by default. We will remove Flash completely from Chrome toward the end of 2020.” Google Chrome, 2017

Flash, in the hands of an inexperienced designer, can cause all types of problems at the moment, especially with:

  • Accessibility
  • Search Engines
  • Users not having the Plug-In
  • Large Download Times

Flash doesn’t even work at all on some devices, like the Apple iPhone. Note that Google sometimes highlights if your site is not mobile friendly on some devices. And on the subject of mobile-friendly websites – note that Google has alerted the webmaster community that mobile friendliness is a search engine ranking factor in 2018.

QUOTE: “Starting April 21 (2015), we will be expanding our use of mobile-friendliness as a ranking signal. This change will affect mobile searches in all languages worldwide and will have a significant impact in our search results. Consequently, users will find it easier to get relevant, high-quality search results that are optimized for their devices”. GOOGLE

HTML5 is the preferred option over Flash these days, for most designers. A site built entirely in Flash will cause an unsatisfactory user experience and will affect your rankings in 2018, especially in mobile search results. For similar accessibility and user satisfaction reasons, I would also say don’t build a site with HTML frames.

As in any form of design, don’t try and re-invent the wheel when simple solutions suffice. The KISS philosophy has been around since the dawn of design.

KISS does not mean boring web pages. You can create stunning sites with smashing graphics – but you should build these sites using simple techniques – HTML & CSS, for instance. If you are new to web design, avoid things like Flash and JavaScript, especially for elements like scrolling news tickers, etc. These elements work fine for TV – but only cause problems for website visitors.

Keep layouts and navigation arrays consistent and simple too. Don’t spend time, effort and money (especially if you work in a professional environment) designing fancy navigation menus if, for example, your new website is an information site.

Same with website optimisation – keep your documents well structured and keep your page Title Elements and text content relevant, use Headings tags sensibly and try and avoid leaving too much of a footprint – whatever you are up to.

How Fast Should Your Web Page Load?

QUOTE: “2 seconds is the threshold for ecommerce website acceptability. At Google, we aim for under a half second.” Maile Ohye, from Google

‘Site Speed’, we are told by Google is a ranking factor. But as with any factor Google confirms is a ranking signal, it’s usually a small, ‘nuanced’ one.

A fast site is a good user experience (UX), and a satisfying UX leads to higher conversions.

How fast your website loads is critical but it is often completely ignored by webmasters.

Very slow sites are a bad user experience – and Google is all about GOOD UX these days.


How Much is ‘Website Speed’ a Google Ranking Factor?

‘How much is a very slow site a negative ranking factor’ is a more useful interpretation of the claim that ‘website speed is a Google ranking factor‘.

First, because I have witnessed VERY slow websites (10 seconds and more to load) negatively impacted in Google, and second, because of statements made by Googlers:

QUOTE: “We do say we have a small factor in there for pages that are really slow to load where we take that into account.” John Mueller, GOOGLE

Google might crawl your site slower if you have a slow site. And that’s bad – especially if you are adding new content or making changes to it.

QUOTE: “We’re seeing an extremely high response-time for requests made to your site (at times, over 2 seconds to fetch a single URL). This has resulted in us severely limiting the number of URLs we’ll crawl from your site.” John Mueller, GOOGLE

John specifically said 2 seconds disrupts CRAWLING activity, not RANKING ability, but you get the picture.

How Fast Should Your Website Load in 2018?

My latest research would indicate as fast as possible.

A Non-Technical Google SEO Strategy

Here are some final thoughts:

  • Use common sense – Google is a search engine – it is looking for pages to give searchers results, 90% of its users are looking for information. Google itself WANTS the organic results full of information. Almost all websites will link to relevant information content so content-rich websites get a lot of links – especially quality links. Google ranks websites with a lot of links (especially quality links) at the top of its search engines so the obvious thing you need to do is ADD A LOT of INFORMATIVE CONTENT TO YOUR WEBSITE.
  • I think ranking in organic listings is a lot about trusted links making trusted pages rank, making trusted links making trusted pages rank ad nauseam for various keywords. Some pages can pass trust to another site; some pages cannot. Some links can. Some cannot. Some links are trusted enough to pass ranking signals to another page. Some are not. YOU NEED LINKS FROM TRUSTED PAGES IF YOU WANT TO RANK AND AVOID PENALTIES & FILTERS.
  • Google engineers are building an AI – but it’s all based on simple human desires to make something happen or indeed to prevent something. You can work with Google engineers or against them. Engineers need to make money for Google but, unfortunately for them, they need to make the best search engine in the world for us humans as part of the deal. Build a site that takes advantage of this. What is a Google engineer trying to do with an algorithm? I always remember it was an idea first before it was an algorithm. What was that idea? Think “like” a Google search engineer when making a website and give Google what it wants. What is Google trying to give its users? Align with that. What does Google not want to give its users? Don’t look anything like that. THINK LIKE A GOOGLE ENGINEER & BUILD A SITE THEY WANT TO GIVE TOP RANKINGS.
  • Google is a link-based search engine. Google doesn’t need content to rank pages but it needs content to give to users. Google needs to find content and it finds content by following links just like you do when clicking on a link. So you need first to make sure you tell the world about your site so other sites link to yours. Don’t worry about reciprocating to more powerful sites or even real sites – I think this adds to your domain authority – which is better to have than ranking for just a few narrow key terms.
  • Google has a long memory when it comes to links and pages and associations for your site. It can forgive but won’t forget. WHAT RELATIONSHIP DO YOU WANT TO HAVE WITH GOOGLE? Onsite, don’t try to fool Google. Be squeaky clean on-site and have Google think twice about demoting you for the odd discrepancies in your link profile.
  • Earn Google’s trust. MAKE FRIENDS WITH GOOGLE
  • Don’t break Google’s trust – if your friend betrays you, depending on what they’ve done, they’ve lost trust. Sometimes that trust has been lost altogether. If you do something Google doesn’t like such as manipulate it in a way it doesn’t want, you will lose trust, and in some cases, lose all trust (in some areas). For instance, your pages might be able to rank, but your links might not be trusted enough to vouch for another site. DON’T FALL OUT WITH GOOGLE OVER SOMETHING STUPID
  • When Google trusts you it’s because you’ve earned its trust to help it satisfy its users in the quickest and most profitable way possible. You’ve helped Google achieve its goals. It trusts you and it will reward you with higher rankings. Google will list “friends” it trusts the most (who it knows to be reputable in a particular topic) at the top of SERPs.
  • Google can be fooled and manipulated just like you can be, but it will kick you in the gonads if you break this trust. Treat Google as you would have it treat you.
  • Be fast.

REMEMBER IT TAKES TIME TO BUILD TRUST AND THAT IS PROBABLY ONE OF THE REASONS WHY GOOGLE is pushing for the need to be ‘trusted’ as a ranking factor.

I, of course, might be reading far too much into Google, TRUST and the TIME Google wants us to wait for things to happen on their end….but consider trust to be a psychological emotion Google is trying to emulate using algorithms based on human ideas.

If you do all the above, you’ll get more and more traffic from Google over time.

If you want to rank for specific keywords in very competitive niches, you’ll need to be a big brand, be picked out by big brands (and linked to), or buy links to fake that trust, or get spammy with it in an intelligent way you won’t get caught. Easier said than done.

I suppose Google is open to the con, for a while at least, just as any human is if it’s based on human traits.

Does Google Promote A Site Or Demote Others In Rankings?

In the video above you hear from at least one spam fighter that would confirm that at least some people are employed at Google to demote sites that fail to meet policy:

QUOTE: “I didn’t SEO at all, when I was at Google. I wasn’t trying to make a site much better but i was trying to find sites that were not ‘implementing Google policies'(?*) and not giving the best user experience.” Murat Yatağan, Former Google Webspam team

I think I see more of Google pulling pages and sites down the rankings (because of policy violation) than promoting them because of discovered ‘quality’.

I proceed thinking that in Google’s world, a site that avoids punishment algorithms, has verified independent links and has content favoured by users over time (which they are tracking) is a ‘quality page’ Google will rank highly.

So, for the long term, on primary sites, once you have cleaned all infractions up, the aim is to satisfy users by:

  • getting the click
  • keeping people on your site by delivering on purpose and long enough for them to terminate their search (without any techniques that would be frowned upon or contravene Google recommendations)
  • converting visitors to at least search terminators, returning visitors or actual sales

What You Should Avoid When Practicing Website Search Engine Optimisation

Google has a VERY basic organic search engine optimisation starter guide pdf for webmasters, which they use internally:

QUOTE: “Although this guide won’t tell you any secrets that’ll automatically rank your site first for queries in Google (sorry!), following the best practices outlined below will make it easier for search engines to both crawl and index your content.” Google, 2008

It is still worth a read, even if it is VERY basic, best practice search engine optimisation for your site.

No search engine will EVER tell you what actual keywords to put on your site to improve individual rankings or get more converting organic traffic – and in Google – that’s the SINGLE MOST IMPORTANT thing you want to know!

Google has updated their SEO starter guide for 2017, although this version is not in PDF format.

Google specifically mentions something very important in the guide:

QUOTE: “Creating compelling and useful content will likely influence your website more than any of the other factors discussed here.” Google SEO Starter Guide, 2017

This starter guide is still very useful for beginners to SEO.

I do not think there is anything in the 2017 guide that is really useful if you have been doing SEO since the last starter guide was first published (2008) and its first update was announced (2010). It still leaves out some of the more complicated technical recommendations for larger sites.

I usually find it useful to keep an eye on what Google tells you to avoid in such documents, which are:

  1. AVOID: “Don’t let your internal search result pages be crawled by Google. Users dislike clicking a search engine result only to land on another search result page on your site.”

  2. AVOID: “Allowing URLs created as a result of proxy services to be crawled.”

  3. AVOID: “Choosing a title that has no relation to the content on the page.”

  4. AVOID: “Using default or vague titles like “Untitled” or “New Page 1″.”

  5. AVOID: “Using a single title across all of your site’s pages or a large group of pages.”

  6. AVOID: “Using extremely lengthy titles that are unhelpful to users.”

  7. AVOID: “Stuffing unneeded keywords in your title tags.”

  8. AVOID: “Writing a description meta tag that has no relation to the content on the page.”

  9. AVOID: “Using generic descriptions like “This is a web page” or “Page about baseball cards”.”

  10. AVOID: “Filling the description with only keywords.”

  11. AVOID: “Copying and pasting the entire content of the document into the description meta tag.”

  12. AVOID: “Using a single description meta tag across all of your site’s pages or a large group of pages.”

  13. AVOID: “Placing text in heading tags that wouldn’t be helpful in defining the structure of the page.”

  14. AVOID: “Using heading tags where other tags like <em> and <strong> may be more appropriate.”

  15. AVOID: “Erratically moving from one heading tag size to another.”

  16. AVOID: “Excessive use of heading tags on a page.”

  17. AVOID: “Very long headings.”

  18. AVOID: “Using heading tags only for styling text and not presenting structure.”

  19. AVOID: “Using invalid ‘Structured Data’ markup.”

  20. AVOID: “Changing the source code of your site when you are unsure about implementing markup.”

  21. AVOID: “Adding markup data which is not visible to users.”

  22. AVOID: “Creating fake reviews or adding irrelevant markups.”

  23. AVOID: “Creating complex webs of navigation links, for example, linking every page on your site to every other page.”

  24. AVOID: “Going overboard with slicing and dicing your content (so that it takes twenty clicks to reach from the homepage).”

  25. AVOID: “Having a navigation based entirely on images, or animations.”

  26. AVOID: “Requiring script or plugin-based event-handling for navigation”

  27. AVOID: “Letting your navigational page become out of date with broken links.”

  28. AVOID: “Creating a navigational page that simply lists pages without organizing them, for example by subject.”

  29. AVOID: “Allowing your 404 pages to be indexed in search engines (make sure that your web server is configured to give a 404 HTTP status code or – in the case of JavaScript-based sites – include a noindex robots meta-tag when non-existent pages are requested).”

  30. AVOID: “Blocking 404 pages from being crawled through the robots.txt file.”

  31. AVOID: “Providing only a vague message like “Not found”, “404”, or no 404 page at all.”

  32. AVOID: “Using a design for your 404 pages that isn’t consistent with the rest of your site.”

  33. AVOID: “Using lengthy URLs with unnecessary parameters and session IDs.”

  34. AVOID: “Choosing generic page names like “page1.html”.”

  35. AVOID: “Using excessive keywords like “baseball-cards-baseball-cards-baseballcards.htm”.”

  36. AVOID: “Having deep nesting of subdirectories like “…/dir1/dir2/dir3/dir4/dir5/dir6/page.html”.”

  37. AVOID: “Using directory names that have no relation to the content in them.”

  38. AVOID: “Having pages from subdomains and the root directory access the same content, for example, “” and “”.”

  39. AVOID: “Writing sloppy text with many spelling and grammatical mistakes.”

  40. AVOID: “Awkward or poorly written content.”

  41. AVOID: “Embedding text in images and videos for textual content: users may want to copy and paste the text and search engines can’t read it.”

  42. AVOID: “Dumping large amounts of text on varying topics onto a page without paragraph, subheading, or layout separation.”

  43. AVOID: “Rehashing (or even copying) existing content that will bring little extra value to users.”

  44. AVOID: “Having duplicate or near-duplicate versions of your content across your site.”

  45. AVOID: “Inserting numerous unnecessary keywords aimed at search engines but are annoying or nonsensical to users.”

  46. AVOID: “Having blocks of text like “frequent misspellings used to reach this page” that add little value for users.”

  47. AVOID: “Deceptively hiding text from users, but displaying it to search engines.”

  48. AVOID: “Writing generic anchor text like “page”, “article”, or “click here”.”

  49. AVOID: “Using text that is off-topic or has no relation to the content of the page linked to.”

  50. AVOID: “Using the page’s URL as the anchor text in most cases, although there are certainly legitimate uses of this, such as promoting or referencing a new website’s address.”

  51. AVOID: “Writing long anchor text, such as a lengthy sentence or short paragraph of text.”

  52. AVOID: “Using CSS or text styling that make links look just like regular text.”

  53. AVOID: “Using excessively keyword-filled or lengthy anchor text just for search engines.”

  54. AVOID: “Creating unnecessary links that don’t help with the user’s navigation of the site.”

  55. AVOID: “Using generic filenames like “image1.jpg”, “pic.gif”, “1.jpg” when possible—if your site has thousands of images you might want to consider automating the naming of the images.”

  56. AVOID: “Writing extremely lengthy filenames.”

  57. AVOID: “Stuffing keywords into alt text or copying and pasting entire sentences.”

  58. AVOID: “Writing excessively long alt text that would be considered spammy.”

  59. AVOID: “Using only image links for your site’s navigation”.

  60. AVOID: “Attempting to promote each new, small piece of content you create; go for big, interesting items.”

  61. AVOID: “Involving your site in schemes where your content is artificially promoted to the top of these services.”

  62. AVOID: “Spamming link requests out to all sites related to your topic area.”

  63. AVOID: “Purchasing links from another site with the aim of getting PageRank”

This is straightforward stuff but sometimes it’s the simple stuff that often gets overlooked. Of course, you combine the above together with the technical recommendations in Google guidelines for webmasters.

Don’t make these simple but dangerous mistakes…..

  1. Avoid duplicating content on your site found on other sites. Yes, Google likes content, but it *usually* needs to be well linked to, unique and original to get you to the top!
  2. Don’t hide text on your website. Google may eventually remove you from the SERPs.
  3. Don’t buy 1000 links and think “that will get me to the top!”. Google likes natural link growth and often frowns on mass link buying.
  4. Don’t get everybody to link to you using the same “anchor text” or link phrase. This could flag you as a ‘rank modifier’. You don’t want that.
  5. Don’t chase Google PR by chasing 100′s of links. Think quality of links….not quantity.
  6. Don’t buy many keyword rich domains, fill them with similar content and link them to your site. This is lazy and dangerous and could see you ignored or worse banned from Google. It might have worked yesterday but it sure does not work today without some grief from Google.
  7. Do not constantly change your site pages names or site navigation without remembering to employ redirects. This just screws you up in any search engine.
  8. Do not build a site with a JavaScript navigation that Google, Yahoo and Bing cannot crawl.
  9. Do not link to everybody who asks you for reciprocal links. Only link out to quality sites you feel can be trusted.

Don’t Flag Your Site With Poor Website Optimisation

A primary goal of any ‘rank modification’ is not to flag your site as ‘suspicious’ to Google’s algorithms or their webspam team.

I would recommend you forget about tricks like links in H1 tags etc. or linking to the same page 3 times with different anchor text on one page.

Forget about ‘which is best’ when considering things you shouldn’t be wasting your time with.

Every element on a page is a benefit to you until you spam it.

Put a keyword in every tag and you may flag your site as ‘trying too hard’ if you haven’t got the link trust to cut it – and Google’s algorithms will go to work.

Spamming Google is often counter-productive over the long term.


  • Don’t spam your anchor text link titles with the same keyword.
  • Don’t spam your ALT Tags or any other tags either.
  • Add your keywords intelligently.
  • Try and make the site mostly for humans, not just search engines.

On-page SEO is no longer as simple as a checklist of keyword here, keyword there. Optimisers are up against lots of smart folk at the Googleplex – and they purposely make this practice difficult.

For those who need a checklist, this is the sort of one that gets me results;

  1. Do keyword research
  2. Identify valuable searcher intent opportunities
  3. Identify the audience & the reason for your page
  4. Write utilitarian copy – be useful. Use related terms in your content. Use plurals and synonyms. Use words with searcher intent like “buy”, “compare”, “hire” etc. I prefer to get a keyword or related term in almost every paragraph.
  5. Use emphasis sparingly to emphasise the important points in the page, whether they are your keywords or not
  6. Pick an intelligent Page Title with your keyword in it
  7. Write an intelligent meta description, repeating it on the page
  8. Add an image with user-centric ALT attribute text
  9. Link to related pages on your site within the text
  10. Link to related pages on other sites
  11. Your page should have a simple search engine friendly URL
  12. Keep it simple
  13. Share it and pimp it

You can forget about just about everything else.

The Continual Evolution of SEO

The ‘Keyword Not Provided’ incident is just one example of Google making ranking in organic listings HARDER – a change for ‘users’ that seems to have the most impact on ‘marketers’ outside of Google’s ecosystem – yes – search engine optimisers.

Now, consultants need to be page-centric (abstract, I know), instead of just keyword centric when optimising a web page for Google. There are now plenty of third-party tools that help when researching keywords but most of us miss the kind of keyword intelligence we used to have access to.

Proper keyword research is important because getting a site to the top of Google eventually comes down to your text content on a page and keywords in external & internal links. Altogether, Google uses these signals to determine where you rank if you rank at all.

There’s no magic bullet to this.

At any one time, your site is probably feeling the influence of some algorithmic filter (for example, Google Panda or Google Penguin) designed to keep spam sites under control and deliver relevant, high-quality results to human visitors.

One filter may be kicking in keeping a page down in the SERPs while another filter is pushing another page up. You might have poor content but excellent incoming links, or vice versa. You might have very good content, but a very poor technical organisation of it.

You must identify the reasons Google doesn’t ‘rate’ a particular page higher than the competition

The answer is usually on the page or in backlinks pointing to the page. Ask yourself:

  • Do you have too few quality inbound links?
  • Do you have too many low-quality backlinks?
  • Does your page lack descriptive keyword rich text?
  • Are you keyword stuffing your text?
  • Do you link out to unrelated sites?
  • Do you have too many advertisements above the fold?
  • Do you have affiliate links on every page of your site and text found on other websites?
  • Do you have broken links and missing images on the page?
  • Does your page meet quality guidelines, legal and advertising standards?

Whatever they are, identify issues and fix them.

Get on the wrong side of Google and your site might well be selected for MANUAL review – so optimise your site as if, one day, you will get that website review from a Google Web Spam reviewer.

The key to a successful campaign, I think, is persuading Google that your page is most relevant to any given search query. You do this by good unique keyword rich text content and getting “quality” links to that page.

The latter is far easier to say these days than actually do!

Next time you are developing a page, consider what looks spammy to you is probably spammy to Google. Ask yourself which pages on your site are really necessary. Which links are necessary? Which pages on the site are emphasised in the site architecture? Which pages would you ignore?

You can help a site along in any number of ways (including making sure your page titles and meta tags are unique) but be careful. Obvious evidence of ‘rank modifying’ is dangerous.

I prefer simple SEO techniques and ones that can be measured in some way. I have never just wanted to rank for competitive terms; I have always wanted to understand at least some of the reasons why a page ranked for these key phrases. I try to create a good user experience for humans AND search engines. If you make high-quality text content relevant and suitable for both these audiences, you’ll more than likely find success in organic listings and you might not ever need to get into the technical side of things, like redirects and search engine friendly URLs.

To beat the competition in an industry where it’s difficult to attract quality links, you have to get more “technical” sometimes – and in some industries – you’ve traditionally needed to be 100% black hat to even get in the top 100 results of competitive, transactional searches.

There are no hard and fast rules to long-term ranking success, other than developing quality websites with quality content and quality links pointing to it. The less domain authority you have, the more text you’re going to need. The aim is to build a satisfying website and build real authority!

You need to mix it up and learn from experience. Make mistakes and learn from them by observation. I’ve found getting penalised is a very good way to learn what not to do.

Remember there are exceptions to nearly every rule, and in an ever-fluctuating landscape you probably have little chance of determining exactly why you rank in search engines these days. I’ve been doing it for over 15 years and every day I’m trying to better understand Google, to learn more and learn from others’ experiences.

It’s important not to obsess about granular ranking specifics that have little return on your investment unless you really have the time to do so! THERE IS USUALLY SOMETHING MORE VALUABLE TO SPEND THAT TIME ON.

That’s usually either good backlinks or great content.

How Does Google Search work?

QUOTE: “So there’s three things that you really want to do well if you want to be the world’s best search engine you want to crawl the web comprehensively and deeply you want to index those pages and then you want to rank or serve those pages and return the most relevant ones first….. we basically take PageRank as the primary determinant and the more PageRank you have that is the more people who link to you and the more reputable those people are the more likely it is we’re going to discover your page…. we use page rank as well as over 200 other factors in our rankings to try to say okay maybe this document is really authoritative it has a lot of reputation because it has a lot of PageRank … and that’s kind of the secret sauce trying to figure out a way to combine those 200 different ranking signals in order to find the most relevant document.” Matt Cutts, Google

The fundamentals of successful Google search optimisation have not changed much over the years.

Google isn’t lying about rewarding legitimate effort – despite what some claim. If they were, I would be a black hat full time. So would everybody else trying to rank in Google.

It is much more complicated to do SEO in some niches today than it was ten years ago.

The majority of small to medium businesses do not need advanced strategies because their direct competition has not employed these tactics either.

I took a medium-sized business to the top of Google recently for very competitive terms doing nothing but ensuring page titles were optimised, rewriting the home page text and earning one or two links from trusted sites.

This site was a couple of years old, with a clean record in Google and a couple of organic links already from trusted sites.

This domain had the authority and capability to rank for some valuable terms, and all we had to do was to make a few changes on the site, improve the depth and focus of website content, monitor keyword performance and tweak page titles.

There was a little duplicate content needing sorting out and a bit of canonicalisation of thin content to resolve, but none of the measures I implemented I’d call advanced.

A lot of businesses can get more converting visitors from Google simply by following basic principles and best practices:

  1. Always making sure that every page in the site links out to at least one other page in the site
  2. Link to your important pages often
  3. Link not only from your navigation but from keyword rich text links in text content – keep this natural and for visitors
  4. Try to keep each page element and content unique as possible
  5. Build a site for visitors to get visitors and you just might convert some to actual sales too
  6. Create keyword considered content on the site people will link to
  7. Watch which sites you link to and from what pages, but do link out!
  8. Go and find some places on relatively trusted sites to try and get some anchor text rich inbound links
  9. Monitor trends, check stats
  10. Minimise duplicate or thin content
  11. Bend a rule or two without breaking them and you’ll probably be ok

Once this is complete it’s time to … add more, and better content to your site and tell more people about it, if you want more Google traffic.

You might have to implement the odd 301, but again, it’s hardly advanced.
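
For example, on an Apache server a permanent redirect can be as simple as a line in a .htaccess file. This is just a minimal sketch – the paths are made up for illustration, and other servers (Nginx, IIS) have their own equivalent syntax.

```
# Hypothetical example: permanently (301) redirect a retired URL to its replacement.
Redirect 301 /old-services-page/ https://www.example.com/services/
```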

I’ve seen simple SEO marketing techniques working for years.

You are better off doing the simple stuff better and faster than worrying about some of the more ‘advanced’ techniques you read about on some blogs, I think – it’s more productive, more cost-effective and safer for most businesses.

Beware Pseudoscience In The SEO Industry 

QUOTE: “Pseudoscience is a claim, belief, or practice posing as science, but which does not constitute or adhere to an appropriate scientific methodology...” Wikipedia

Beware folk trying to bamboozle you with science. This isn’t a science when Google controls the ‘laws’ and changes them at will.

QUOTE: “They follow the forms you gather data you do so and so and so forth but they don’t get any laws they don’t haven’t found out anything they haven’t got anywhere yet maybe someday they will but it’s not very well developed but what happens is an even more mundane level we get experts on everything that sound like this sort of scientific expert they they’re not scientist is a typewriter and they make up something.”  Richard Feynman, Physicist

I get results by:

  • analysing Google rankings
  • performing Keyword research
  • making observations about ranking performance of your pages and that of others (though not in a controlled environment)
  • placing relevant, co-occurring words you want to rank for on pages
  • using synonyms
  • using keywords in the anchor text of links on relevant pages, and pointing those links at the relevant pages you want to rank highly for those keywords
  • understanding that the words that feature in your title tag are what that page is going to rank best for (see the example after this list)
  • getting high-quality links from other trustworthy websites
  • publishing lots and lots of new, higher-quality content
  • focusing on the long tail of keyword searches
  • understanding it will take time to beat all this competition
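
As a rough illustration of the title and on-page copy points above, here is a hypothetical page fragment – the business, phrase and copy are all invented. The idea is simply that the title element, heading and body text all describe the same topic, with natural variations rather than repetition.

```html
<head>
  <!-- The title element carries the phrase the page should rank best for. -->
  <title>Website Migration Service | Example SEO Ltd</title>
</head>
<body>
  <h1>Website Migration Service</h1>
  <!-- The copy uses related words and synonyms (site move, domain change)
       rather than repeating the exact phrase over and over. -->
  <p>We plan and manage site moves, domain changes and HTTPS switches,
     so your rankings and traffic survive the migration.</p>
</body>
```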

I always expected to get a site demoted by:

  • getting too many links with the same anchor text pointing to a page
  • keyword stuffing a page
  • trying to manipulate Google too much on a site
  • creating a “frustrating user experience.”
  • chasing the algorithm too much
  • getting links I shouldn’t have
  • buying links

Not that any of the above is automatically penalised all the time.

I was always of the mind that I don’t need to understand the maths or science of Google all that much to understand what Google engineers want.

The biggest challenge these days is getting trusted sites to link to you, but the rewards are worth it.

To do it, you probably should be investing in some marketable content, or compelling benefits for the linking party (that’s not just paying for links somebody else can pay more for). Buying links to improve rankings WORKS but it is probably THE most hated link building technique as far as the Google web spam team is concerned.

I was very curious about the science of optimisation. I studied what I could, but it left me a little unsatisfied. I learned that building links, creating lots of decent content and learning how to monetise that content better (while not breaking any major TOS of Google) would have been a more worthwhile use of my time.

Getting better and faster at doing all that would be nice too.

There are many problems with blogs, too, including mine.

Misinformation is an obvious one. Rarely are your results conclusive or your observations 100% accurate, even if you think a theory holds water on some level. I try to update old posts with new information if I think the page is only valuable with accurate data.

Just remember most of what you read about how Google works from a third party is OPINION and just like in every other sphere of knowledge, ‘facts’ can change with a greater understanding over time or with a different perspective.

Chasing The Algorithm

There is no magic bullet and there are no secret formulas to achieve fast number 1 ranking in Google in any competitive niche WITHOUT spamming Google.

A legitimately earned high position in search engines takes a lot of hard work.

There are a few less talked about tricks and tactics that are deployed by some (better than others) to combat algorithm changes, for instance, but there are no big secrets in modern SEO (no “white hat”  SEO secrets anyway).

There are clever strategies, though, and creative solutions to be found to exploit opportunities uncovered by researching the niche.

Note that when Google recognises a new strategy that gets results, the strategy itself usually becomes ‘webspam‘ and something you can be penalised for, so I would beware of jumping on the latest SEO fad in 2018.

The biggest advantage any one provider has over another is experience and resource. The knowledge of what doesn’t work and what will hurt your site is often more valuable than knowing what will give you a short-lived boost. Getting to the top of Google is a relatively simple process – one that is constantly changing. Professional SEO is more a collection of skills, methods and techniques. It is more a way of doing things than a one-size-fits-all magic trick.

After over a decade practising and deploying real campaigns, I’m still trying to get it down to its simplest, most cost-effective processes.

I think it’s about doing simple stuff right.

Good text, simple navigation structure, quality links. To be relevant and reputable takes time, effort and luck, just like anything else in the real world, and that is the way Google want it.

If a company is promising you guaranteed rankings and has a magic bullet strategy, watch out.

I’d check it didn’t contravene Google’s guidelines.

How Long Does It Take To See Results from SEO?

QUOTE: “In most cases the SEO will need four months to a year to help your business first implement improvements and then see potential benefit.” Maile Ohye, Google 2017

Some results can be gained within weeks, and you need to expect some strategies to take months to see the benefit. Google WANTS these efforts to take time. Critics of the search engine giant would point out that Google wants fast, effective rankings to be a feature of Google’s own Adwords sponsored listings.

Optimisation is not a quick process, and a successful campaign should be judged over months, if not years. Most successful, fast-ranking website optimisation techniques end up finding their way into the Google Webmaster Guidelines – so be wary.

It takes time to build quality, and it’s this quality that Google aims to reward in 2018.

It takes time to generate the data needed to begin to formulate a campaign, and time to deploy that campaign. Progress also depends on many factors:

  • How old is your site compared to the top 10 sites?
  • How many back-links do you have compared to them?
  • How does the quality of their back-links compare to yours?
  • What is the history of people linking to you (what words have people been using to link to your site)?
  • How good a resource is your site?
  • Can your site attract natural back-links (e.g. you have good content or a great service) or are you 100% relying on your agency for back-links (which is very risky in 2018)?
  • How much unique content do you have?
  • Do you have to pay everyone to link to you (which is risky), or do you have a “natural” reason people might link to you?

Google wants to return quality pages in its organic listings, and it takes time to build this quality and for that quality to be recognised.

It takes time too to balance your content, generate quality backlinks and manage your disavowed links.
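
As an aside on disavowed links: the disavow tool in Search Console accepts a plain text file, one URL or domain per line, with ‘#’ lines treated as comments. The entries below are invented, purely to show the format – only disavow links you are reasonably sure are harming your site.

```text
# Hypothetical disavow.txt - uploaded via the Search Console disavow tool
# Lines starting with # are comments

# Disavow a single low-quality page linking to the site
http://low-quality-blog.example/paid-links-page.html

# Disavow every link from an entire domain
domain:spammy-directory.example
```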

Google knows how valuable organic traffic is – and they want webmasters investing a LOT of effort in ranking pages.

Critics will point out that the higher the cost of expert SEO, the more cost-effective Adwords becomes, but Adwords will only get more expensive, too. At some point, if you want to compete online, you’re going to HAVE to build a quality website, with a unique offering to satisfy returning visitors – the sooner you start, the sooner you’ll start to see results.

If you start NOW and are determined to build an online brand – a website rich in content with a satisfying user experience – Google will reward you in organic listings.


Web optimisation is a marketing channel just like any other and there are no guarantees of success in any, for what should be obvious reasons. There are no guarantees in Google Adwords either, except that costs to compete will go up, of course.

That’s why it is so attractive – but like all marketing – it is still a gamble.

At the moment, I don’t know you, your business, your website, your resources, your competition or your product. Even with all that knowledge, calculating ROI is extremely difficult because ultimately Google decides on who ranks where in its results – sometimes that’s ranking better sites, and sometimes (often) it is ranking sites breaking the rules above yours.

Nothing is absolute in search marketing.

There are no guarantees – despite claims from some companies. What you make from this investment is dependent on many things, not least, how suited your website is to convert visitors into sales.

Every site is different.

Big-brand campaigns are far, far different from small-business SEO campaigns that don’t have any links to begin with, to give you but one example.

It’s certainly easier if the brand in question has a lot of domain authority just waiting to be unlocked – but of course, that’s a generalisation, as big brands have big-brand competition too.

It depends entirely on the quality of the site in question and the level and quality of the competition, but smaller businesses should probably look to own their niche, even if limited to their location, at first.

Local SEO is always a good place to start for small businesses.

A Real Google Friendly Website

At one time, a Google-friendly website meant a website built so Googlebot could scrape it correctly and rank it accordingly.

When I think ‘Google-friendly’ these days, I think of a website Google will rank at the top, if popular and accessible enough, and that won’t drop like a f*&^ing stone for no apparent reason one day, even though I followed the Google SEO starter guide to the letter….. just because Google has found something it doesn’t like – or has classified my site as undesirable one day.

It is not JUST about original content anymore – it’s about the function your site provides to Google’s visitors – and it’s about your commercial intent.

I am building sites at the moment with the following in mind…..

  1. Don’t be a website Google won’t rank – what Google classifies your site as is perhaps the NUMBER 1 Google ranking factor not often talked about – whether Google determines this algorithmically or, eventually, manually. That is – whether it is a MERCHANT, an AFFILIATE, a RESOURCE or DOORWAY PAGE, SPAM, or VITAL to a particular search – what do you think Google thinks about your website? Is your website better than the ones in the top ten of Google now? Or just the same? Ask why Google should bother ranking your website if it is just the same, rather than why it would not because it is just the same…. How can you make yours different? Better.
  2. Think that, one day, your website will have to pass a manual review by ‘Google’ – the better rankings you get, or the more traffic you get, the more likely you are to be reviewed. Know that Google classes even some otherwise useful sites as spammy, according to leaked documents. If you want a site to rank high in Google, it had better ‘do’ something other than exist only to link to another site for a paid commission. Know that to succeed, your website needs to be USEFUL to the visitors Google will send you – and a useful website is not just a website with the sole commercial intent of sending a visitor from Google to another site – or a ‘thin affiliate’, as Google CLASSIFIES it.
  3. Think about how Google can algorithmically and manually determine the commercial intent of your website – think about the signals that differentiate a real small business website from a website created JUST to send visitors to another website. Affiliate links on every page, or adverts above the fold, for instance, can be a clear indicator of a webmaster’s particular commercial intent – hence why Google has a Top Heavy Algorithm.
  4. Google is NOT going to thank you for publishing lots of similar articles and near duplicate content on your site – so EXPECT to have to create original content for every page you want to perform in Google, or at least, not publish content found on other sites.
  5. Ensure Google knows your website is the origin of any content you produce (typically by simply pinging Google via XML or RSS) – I’d go as far as to say consider using Google+ to confirm this too…. this sort of thing will only get more important as the year rolls on
  6. Understand and accept why Google ranks your competition above you – they are either:
    1. more relevant and more popular,
    2. more relevant and more reputable, or
    3. manipulating backlinks better than you.
    4. spamming
    Understand that everyone at the top of Google falls into those categories and formulate your own strategy to compete – relying on Google to take action on your behalf is VERY probably not going to work.
  7. Being ‘relevant’ comes down to keywords & key phrases – in domain names, URLs, Title Elements, the number of times they are repeated in text on the page, text in image alt tags, rich markup and importantly in keyword links to the page in question. If you are relying on manipulating hidden elements on a page to do well in Google, you’ll probably trigger spam filters. If it is ‘hidden’ in on-page elements – beware relying on it too much to improve your rankings.
  8. The basics of GOOD SEO haven’t changed for years – though the effectiveness of particular elements has certainly narrowed or changed in type of usefulness – you should still be focusing on building a simple site using VERY simple SEO best practices – don’t sweat the small stuff, while all the time paying attention to the important stuff – add plenty of unique PAGE TITLES and plenty of new ORIGINAL CONTENT. Understand how Google SEES your website. CRAWL it, like Google does, with (for example) Screaming Frog SEO spider, and fix malformed links or things that result in server errors (500), broken links (400+) and unnecessary redirects (300+). Each page you want in Google should serve a 200 OK header message (see the sketch after this list).
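
To make point 8 concrete, here is a minimal sketch of the kind of status-code check a crawler performs, using Python and the third-party requests library. The URLs are placeholders – swap in pages from your own site – and a dedicated crawler like Screaming Frog will do this far more thoroughly.

```python
# Rough check of the HTTP status each important URL returns.
# Requires: pip install requests
import requests

urls = [
    "https://www.example.com/",
    "https://www.example.com/services/",
    "https://www.example.com/old-page/",
]

for url in urls:
    response = requests.get(url, allow_redirects=False, timeout=10)
    status = response.status_code
    if status == 200:
        note = "OK - fine as it is"
    elif 300 <= status < 400:
        note = f"redirects to {response.headers.get('Location')} - check it is intentional"
    elif 400 <= status < 500:
        note = "broken - fix it, or redirect it and update links pointing here"
    else:
        note = "server error - investigate"
    print(f"{status}  {url}  ({note})")
```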

Shaun Anderson…and that’s all for now.

This is a complex topic, as I said at the beginning of this in-depth article.

I hope you enjoyed this free DIY SEO guide for beginners. DO keep up to date with:

Google Webmaster Guidelines

You do not pay to get into search engines, and you don’t necessarily need to even submit your site to them, but you do need to know their ‘rules’ – especially the rules laid down by Google.

Note: these rules for inclusion can and do change. These rules are official advice from Google to webmasters, and Google is really cracking down on ‘low-quality’ techniques that influence their rankings in 2018.

I recently had cause to revisit the ever-expanding ‘Google Webmaster Guidelines’ documentation.

Below is a somewhat ordered list of the top 60 or so OFFICIAL Google Webmaster Guidelines documents you should be aware of in 2018 IF you want to ensure the long-term ‘health’ of your site in Google that leads to top rankings and more traffic.

Separate from the 60 documents below, I have singled out one important upcoming change in Google – ‘Mobile First Indexing‘ – which is going to be very important in the near future; your site NEEDS to be prepared for this big change.

Once you’ve read the above link, see below for the most important website ranking guidelines published by Google:

  1. Guidance on building high-quality websites
  2. Main webmaster guidelines
  3. Quality Rater’s Guide March 14, 2017 (and previous years!)
  4. Link Schemes Warning
  5. Disavow Backlinks Warning
  6. Auto-Generated Content Warning
  7. Affiliate Programs Advice
  8. Report spam, paid links, or malware
  9. Reconsideration requests
  10. List of common manual actions
  11. Use rel=”nofollow” for specific links
  12. Adding A Site To Google
  13. Browser Compatibility Advice
  14. URL Structure Advice
  15. Learn about sitemaps
  16. Duplicate Content
  17. Use canonical URLs
  18. Indicate paginated content
  19. Change page URLs with 301 redirects
  20. How Google Deals With AJAX
  21. Review your page titles and snippets
  22. Meta tags that Google understands
  23. Image Publishing Guidelines
  24. Video best practices
  25. Flash and other rich media files
  26. Learn about robots.txt files
  27. Create useful 404 pages
  28. Introduction to Structured Data
  29. Mark Up Your Content Items
  30. Schema Guidelines
  31. Keyword Stuffing Warnings
  32. Cloaking Warning
  33. Sneaky Redirects Warning
  34. Hidden Text & Links Warnings
  35. Doorway Pages Warnings
  36. Scraped Content Warnings
  37. Malicious Behaviour Warnings
  38. Hacking Warnings
  39. Switching to Https
  40. User-Generated Spam Warnings
  41. Social Engineering
  42. Malware and unwanted software
  43. Developing Mobile Sites
  44. Sneaky mobile redirects
  45. Developing mobile-friendly pages
  46. Use HTTP “Accept” header for mobile
  47. Feature phone sitemaps
  48. Multi-regional and multilingual sites
  49. Use hreflang for language and regional URLs
  50. Use a sitemap to indicate alternate language
  51. Locale-aware crawling by Googlebot
  52. Remove information from Google
  53. Move your site (no URL changes)
  54. Move your site (URL changes)
  55. How Google crawls, and serves results
  56. Ranking In Google
  57. Search Engine Optimisation
  58. Steps to a Google-friendly site
  59. Webmaster FAQ
  60. Check your site’s search performance

Did I miss any? Let me know.

The Google Webmaster Channel is also a useful resource for beginners to subscribe to.

Free SEO EBOOK (2016) PDF

Congratulations! You’ve just finished reading the first chapter of my 2018 training guide. I am almost ready to update the Ebook for 2018.

Hobo UK SEO – A Beginner’s Guide (2016) is a free pdf ebook you can DOWNLOAD COMPLETELY FREE from here (2mb) that contains my notes about driving increased organic traffic to a site within Google’s guidelines.

I am based in the UK and most of my time is spent looking at Google.co.uk, so this ebook (and my blog posts) should be read with that in mind.

Google is BIG – with many different country-specific search engines with wildly different results in some instances. I do all my testing on Google.co.uk.

It is a guide based on my 20 years experience.

I write and publish to my blog to keep track of thoughts and get feedback from industry and peers. As a result of this strategy, I get about 100K visitors a month from Google.

My ebook is meandering – I am not a professional author or copywriter – but contained within it is the information I used to take a penalised site to record Google organic traffic levels.

This is the 4th version of this document I’ve published in 7 years and I hope this, and previous ones, have demonstrated my interest in this field in a way that others can learn from.


There are no warranties with this guide or any information you find on my site – it is a free pdf. This SEO training guide is a collection of my opinions, observations and theories that I have put into practice to rank this site for over 10 years. It is not ‘advice’.

I hope you find it useful and I hope beginners can get something out of the free ebook or the links to other high-quality resources it references.

Click here and subscribe to this blog for free updates and free updates to my free SEO ebook.


Disclaimer: “Whilst I have made every effort to ensure that the information I have provided is correct, it is not advice; I cannot accept any responsibility or liability for any errors or omissions. The author does not vouch for third party sites or any third party service. Visit third party sites at your own risk. I am not directly partnered with Google or any other third party. This website uses cookies only for analytics and basic website functions. This article does not constitute legal advice. The author does not accept any liability that might arise from accessing the data presented on this site. Links to internal pages promote my own content and services.” Shaun Anderson, Hobo

  • Get SEO Help!

    If you need any help with any SEO-related project, please contact me to discuss your requirements. I have almost 20 years experience optimising websites from the smallest to the very largest sites.

    Free Quote