Are Keywords In URLs A Ranking Factor in Google?


QUOTE: “I believe that is a very small ranking factor. So it is not something I’d really try to force. And it is not something I’d say it is even worth your effort to restructure your site just so you can get keywords in your URL.” John Mueller, Google 2016

Keywords in URLs are a tiny ranking factor and are user-friendly. Keep URLs under 50 characters so they display fully in desktop search results; longer URLs are truncated. I use keyword-rich URLs on new sites, but I avoid changing URLs on an otherwise perfectly functional website.

SEF (search engine friendly) URLs are user-friendly, relevant and easy-to-understand page names on your website, visible in the browser address bar.


Are Keywords In URLs A Ranking Factor in Google?

Does Google count a keyword in the URL when ranking a page? YES. And it is USEFUL from the observations I have made over the years. Perhaps very important from a relevance point of view, in some circumstances.

It makes sense it carries some weight. It’s the name of an entire document.

In a recent video, John Mueller said that keywords in the URL are a ‘small ranking factor’. Having keywords in the URL does affect rankings for individual keyword phrases, but its impact can only be seen on very long-tail searches.

I’ve always proceeded as if the keyword in a URL was a relevance signal.

Do You Need Keywords in URLs to Rank High In Google?

No.

I wouldn’t change a site structure JUST to change URLs to search engine friendly URLs. That said, unfriendly URLs can be an indication of a poorly managed site, and if a big clean-up is called for, I might start from basic SEO best practice and use SEF URLs.

How Long Should A URL Be?

50 characters. If the URL (including your full domain name) is longer, it will be truncated in Google desktop SERPs. That admittedly doesn’t give you much to work with, but don’t worry about it too much; just keep the URL as short as possible.

Is There A Massive Difference in Google Rankings When You Change from Dynamic to Static Keyword-Rich URLs?

NO, but I HAVE detected benefits from using keyword-rich, SEO friendly URLs.

Google might award the page some sort of RELEVANCE boost because of the actual file/page name.

On its own, this boost is, in my experience, virtually undetectable because of the noise from other, more powerful ranking signals, except in a very granular sense (at individual keyword level), and it is only apparent when you look specifically for it (and that was a while ago!).

Static Keyword URLs Can Have Value When Used In Anchor Text To Link To Your Site

Where this benefit is slightly more detectable is when another site links to your site with the URL as the link.

Then it is fair to say you do get a boost because keywords are in the actual anchor text link to your site, and I believe this is the case, but again, that depends on the quality of the page linking to your site – i.e. whether Google trusts it and whether it passes PageRank (!) and anchor text relevance.

Sometimes I will remove the stop-words from a URL and leave only the important keywords from the page title, because a lot of forums garble long URLs to shorten them.

I configure URLs the following way (a minimal redirect sketch follows the list):

  1. www.hobo-web.co.uk/?p=292 — is automatically changed by the CMS using URL rewrite to
  2. www.hobo-web.co.uk/websites-clean-search-engine-friendly-URLs/ — which I then break down to something like
  3. www.hobo-web.co.uk/search-engine-friendly-URLs/
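A minimal sketch of how that final break-down step might be handled in a .htaccess file on an Apache server – assuming the CMS already rewrites the dynamic ?p=292 URL to the longer slug, and that mod_alias is available; the rule itself is illustrative, not a prescription:

```
# Hypothetical .htaccess rule (Apache, mod_alias):
# permanently redirect the longer auto-generated slug to the shorter, keyword-focused one.
Redirect 301 /websites-clean-search-engine-friendly-URLs/ https://www.hobo-web.co.uk/search-engine-friendly-URLs/
```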

Note that although Googlebot can crawl sites with dynamic URLs, it is far from a user-friendly, accessible set-up.

As standard, I use clean search engine friendly URLs where possible on new sites these days, and try to keep the URLs as simple as possible and do not obsess about it.

That’s my aim at all times when I SEO: I try to keep things simple. Google does look at keywords in the URL, even at a granular level. Having a keyword in your URL might be the difference between your site ranking and not ranking for long-tail terms.

Google’s Advice on Static vs Dynamic URLs

Google can read and index dynamic or static URLs just fine.

QUOTE: “MYTH: Dynamic URLs cannot be crawled. FACT: We can crawl dynamic URLs and interpret the different parameters.” Google

and

QUOTE: “Myth: Dynamic URLs are okay if you use fewer than three parameters. Fact: There is no limit on the number of parameters.” Google

Long-Term Test Results Tracking A Keyword Only Present In The URL and Not on the Page

I looked into one of my long-term tests.

It is sometimes hard to test Google without creating low-quality pages that will probably be treated differently than high-quality pages, so I wanted my tests to be on high-quality pages that were trusted by Google.

I don’t like using made-up words in my tests because Google can work differently in high-quality SERPs with lots of competition for a term, or conversely, on poorer quality SERPs.

At one time I ranked number 1 for the test focus keyphrase and with a good-quality exact match domain.

I have backlinks and a redirected, TRUSTED, CITED EXACT MATCH DOMAIN pointing to an INTERNAL page on a site where the focus keyword phrase I am interested in affecting is NOT on the page at all BUT is in the URL slug for the page. In effect, of course, that means that somewhere on the site in the underlying HTML there are pages that reference this page with that keyword in a URL slug. That might be pertinent.

The redirected domain itself is 10 years old and has powerful links. The page itself is a high-quality page with thousands of words and is 100% on TOPIC for the redirect – although it does not have the exact phrase I am focused on, anywhere on the page.

**IMPORTANT TO NOTE – ANY reference to the focus keyphrase I am interested in is wrapped in what I’ll call, for lack of the proper marketing or technical term, “the REDIRECT ZONE”, e.g. the keyword phrase is not on the page, or in internal links to the page, or in backlinks to the page (at least, not in any backlinks that do NOT pass through the redirect zone, which in this case is a set of 301 redirects).**

I was very careful to keep the signal of this focus keyphrase WITHIN the redirect zone, and once it was isolated, I could see what impact on rankings could be observed.

The focus keyphrase is a WORD + A NUMBER. At a granular level, Google must put a lot of weight on ‘numbers’ that provide information about specific entities, and this must be a powerful switch at an important level – e.g. World Cup 1966 and World Cup 1962 are TWO TOTALLY DIFFERENT SERPs because of one character in a number.

It is evident that numbers, in instances like this, are powerful switches and indicators. The LAST place the signal was present on the new site, outside of any redirect zone, was in the URL SLUG of the focus test page, i.e. in keywords in the URL.

Once the keyword was removed about a year ago, the rankings slowly disintegrated into nothing (out of the top 100 at least), although they came back for a brief period at the beginning of this year (albeit on page 90 or something). I have an idea why that might be, but no evidence to offer at the moment.

TEST RESULTS

It would seem Google IS STILL INTERESTED in keywords in URL SLUGS, as John Mueller indicates.

I proceed on the basis that keywords in the URL are a RANKING SIGNAL – and a VERY POWERFUL SWITCH at a granular level (for instance, in my case study above), especially for longer-tail searches.

Understandably, a keyword in a URL on its own is hardly a ranking signal whose value is easy to determine from a relevance point of view – e.g. on its own it’s unlikely to get you into the top ten of results, but it could be used to unlock relevance contained in the redirect zone, which could be very important to anyone managing 301 redirects and so ‘redirect zones’.

AND OF COURSE – Ranking signals, emphasis on the plural, combine to create ‘relevance’ and ‘context’, as far as Google is concerned.

FINAL THOUGHTS

Under your website structure there is quite possibly a redirect zone, e.g. controlled with a .htaccess file if, like me, you are on an Apache server.

Old URLs are redirected to new URLs, and old domains are redirected to new websites. In my recent tests, I think I see evidence of Google being very careful about what relevance is passed through 301 redirects.
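For illustration, a ‘redirect zone’ in a .htaccess file might look something like this minimal sketch – the paths and domains here are hypothetical examples, not taken from any real site:

```
# Hypothetical .htaccess 'redirect zone' sketch (Apache)

# An old URL redirected to a new URL on the same site (mod_alias):
Redirect 301 /old-page/ https://www.example.com/new-page/

# An old domain redirected to a new website (mod_rewrite):
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-example\.co\.uk$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```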

In FACT, if particular keywords are NOT in any element on the page which is the final destination for Google to cache, it might be the case that Google will NOT pass A LOT (maybe any) of the contextual relevance signal along the redirect to the final page.

Does that mean that, when Google thinks there is no reason to justify passing signals along a chain of redirects, no signal is passed along? Could this help to insulate against negative SEO attacks, Google bombing and attempts to redirect penalties to others?

Is it an attack on black hat redirect management? (If you don’t understand what that means by now, you probably don’t need to understand it.)

BEST PRACTICE

Context and Structure matter, and need to meet and merge, to get the most out of any signals pointing at your site. Links, and so redirects (e.g. ‘structure’), must match context (and the signals on new cached pages).

As John Mueller says, I wouldn’t rip apart a perfectly good site just to have search engine friendly URLs, but if you are building a new site, having search engine friendly URLs still provides some ranking bonus.

If you are redirecting ANYTHING, pay very close attention to the actual terms you are redirecting and ensure they are present on the new page, and not trapped in the redirect zone.

PS – You can recover keyword rankings from the redirect zone, and where it gets interesting is when you start to put the signals back… one by one, to see the effects of individual ranking signals.


Hope it makes sense.

Which Is Best For Google – Directories or Files?

Sometimes I use directories and sometimes I use files.

I have not been able to determine if there is any actual benefit to using either above the other.

I prefer files like .html when I am building a new small site from scratch, as they are the ultimate end of the line for search engines as I visualise things – whereas a folder is a collection area, whether you have other files apart from the index or not.

I think it takes a little more to get a subfolder (within a domain) trusted than an individual file, and I guess this sways me to use files on most websites we design. Once a site and its contents (files or subdirectory paths) are trusted, it’s six of one and half a dozen of the other.

Subfolders can be treated differently than files that end in, for instance, .htm, in my experience. Some folders, if you don’t build links to them or incorporate them properly into the site architecture, can be trusted less than other subfolders in your site or ignored entirely.

Historically subfolders (subdirectories) seem to take a little bit longer to get indexed by Google than straight files in some cases.

I have seen entire subdirectories of sites swept out of Google’s listings, usually because of consistent page quality issues.

People talk about trusted domains, but they don’t mention – or don’t know – that not ALL of a domain has the same amount of trust.

Google treats some folders… differently.

Probably dependent on where links are coming from – and page quality issues, of course. Is this folder starved of links when the rest of the site has hundreds?

Google might take a while to get to know a new folder and trust it. Since I originally wrote this article, Matt Cutts has, of course, gone on record about Google taking action on particular areas of a site.

Some say don’t go beyond 4 levels of folders in a URL setup. I haven’t experienced any issues, ever, on that front, but you never know.

Which Is Best? – Absolute Or Relative URLs

My advice to any website designer would be to, above all, keep it consistent.

I prefer absolute URLs. That’s just a preference. Google doesn’t care so neither do I, really. I have just gotten into the habit of using absolute URLs.

  • What is an absolute URL? Example – “https://www.hobo-web.co.uk/search-engine-optimisation/”
  • What is a relative URL? Example – “/search-engine-optimisation/”

Relative just means relative to the document the link is on.

Move that page to another site and it won’t work.

With an absolute URL, it would work.

Does Google Remember The History of Important URLs?

It has seemed that way in the past.

If Google remembers every single historic version of a URI (or page on your website) – then why? I wonder if it might use such knowledge to work out things to rank you by – like how much Google trusts you, for instance, or to rate the quality improvements to a page over time.

Google has an incredible amount of historical data about your actions on your pages.

For instance.

  • Can even the most minor changes to a page indicate your intent?
  • Can it build a picture over time about what you are up to?
  • Can it be a measurement your site can be held accountable for?
  • Could it be a metric which might affect page and site rankings over time?
  • In a positive, or negative, fashion?

Is Google, at any level, comparing one version of your page with the version you just changed, or the version you started with, or even just counting the number of times you’ve modified it?

Exact Match Domain Name Ranking Benefit

Does having a keyword in the URL of your web address improve rankings in Google? An exact match domain name is a web address with the same EXACT words in it that make up a popular search – like “bingo.com” or “bingo.co.uk”.

The answer is yes, it can help. Exact match – and partial match – domains are just NOWHERE near as powerful as they once were.

An exact match domain name with low-quality spammy links and keyword stuffed text USED to outrank a real site with thousands of natural links for that term.

For a long time.

But things have changed a lot over the past few years.

Having a low-quality long-tail keyword variation exact match domain is no substitute for having a brand in my opinion, and never was.

An exact match domain might help you rank for that term, but if you want thousands of visitors a day, you need a breadth of keywords to keep a business running, not just one long-tail key phrase and a few variants.

Do exact match domains still work?

Yes, although nowhere near as powerful as they once were.