SEO Friendly URLs (or search engine friendly URLs) are clean, relevant and easy-to-understand addresses for pages on your website – the destination you see in the browser address bar in most modern browsers.
Do you need them?
You do not need clean URLs in site architecture for Google to spider a site successfully (23/9/08 – confirmed by Google), but they can be beneficial, and I DO use clean URLs wherever I get the choice.
While I wouldn’t necessarily rip apart a site structure JUST to change URLs, poorly designed page URLs are often a sign of other, less obvious, sloppy SEO – and if a big clean-up is called for, then I would consider starting with this basic SEO best practice.
Is there a massive difference in Google when you use clean URLs?
NO, not massive, but I HAVE seen benefits from using keyword rich, SEO friendly URLs.
The thinking is that you might get a boost in Google SERPs if your URLs are clean – because you are using keywords in the actual page name instead of a parameter or ID number.
Google might reward the page some sort of RELEVANCE because of the actual file / page name.
On its own, this boost is, in my experience, virtually undetectable, except in a very granular sense (at individual keyword level), and only obvious when you look specifically for it.
Where this benefit is slightly more detectable is when people (say, in forums) link to your site with the URL as the link text.
Then it is fair to say you do get a boost because keywords are in the actual anchor text link to your site, and I believe this is the case – but again, that depends on the quality of the page linking to your site, i.e. if Google trusts it and it passes PageRank (!) and anchor text relevance.
Sometimes I will remove the stop-words from a URL and leave just the important keywords from the page title, because a lot of forums garble a long URL to shorten it.
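The idea above can be sketched in a few lines of Python – a minimal illustration of how a CMS or plugin might build a short, keyword-focused slug from a page title. The `STOP_WORDS` list and the `slugify` helper name are my own for this example; real slug plugins use longer stop-word lists.

```python
import re

# A small illustrative stop-word list - real CMS plugins ship much longer ones.
STOP_WORDS = {"a", "an", "and", "the", "of", "for", "in", "on", "to", "is", "are"}

def slugify(title, drop_stop_words=True):
    """Turn a page title into a short, keyword-focused URL slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    if drop_stop_words:
        # Keep only the important keywords, as described above.
        words = [w for w in words if w not in STOP_WORDS]
    return "-".join(words)

print(slugify("A Guide to Clean and Search Engine Friendly URLs"))
# guide-clean-search-engine-friendly-urls
```

The shorter slug keeps the keywords intact even when a forum truncates the displayed link.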
I configure URLs the following way:
- www.hobo-web.co.uk/?p=292 — is automatically changed by the CMS, using URL rewriting, to
- www.hobo-web.co.uk/websites-clean-search-engine-friendly-URLs/ — which I then break down further by removing stop-words
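The rewrite step above can be sketched as a simple lookup: the dynamic `?p=292` request is mapped to its clean slug and redirected. This is a hypothetical sketch of the logic only – in a real CMS like WordPress the ID-to-slug mapping lives in the database and the rewrite is handled by the permalink system, not hand-written code like this.

```python
# Hypothetical ID-to-slug table - in a real CMS this lives in the database.
POSTS = {292: "websites-clean-search-engine-friendly-URLs"}

def rewrite(post_id):
    """Return the clean path a request for /?p=<id> should 301-redirect to."""
    slug = POSTS.get(post_id)
    return f"/{slug}/" if slug else None

print(rewrite(292))
# /websites-clean-search-engine-friendly-URLs/
```

A permanent (301) redirect from the dynamic URL to the clean one keeps any existing links and rankings pointing at a single canonical address.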
It should be remembered that although Googlebot can crawl sites with dynamic URLs, many webmasters assume there is a greater risk it will give up if the URLs are deemed unimportant and contain multiple variables and session IDs (a theory) – and in 2017, that sort of set-up is sure-fire Google Panda material if not managed properly.
As standard, I use clean search engine friendly URLs where possible on new sites these days, and try to keep the URLs as simple as possible and do not obsess about it.
That’s my aim at all times when I SEO. I try to keep things simple.
Be aware though – Google does look at keywords in the URL, even at a granular level.
Having a keyword in your URL might be the difference between your site ranking and not for long tail terms.
SAYING THAT – there are MORE IMPORTANT areas to focus on than this.
Does Google Count A Keyword In The URL (Filename) When Ranking A Page?
Here is a simple observation I made a while back and test every now and again.
The answer to whether a keyword in the URI makes a difference to whether a page ranks or not for a query, in Google organic results, is YES.
And it may be IMPORTANT, judging by other observations I have made.
Perhaps very important from a relevance point of view – though that is NOT WHAT MOST SEOs SEEM TO THINK – I didn’t either, until I looked for myself.
Think about it, though – it makes sense. It’s the name of a whole document. It might be more important than the keyword on the page in plain text – that is, it might be more RELEVANT to Google. See my next screenshot.
- Observation – the keyword in the URI outranks a page with the keyword in the text content. Of course, this is just one test page, and positions can change with time due to any number of things. Why it’s important is another matter. For instance, does Google count this as a link – it IS cited on another document, albeit not in the anchor text of that link? Or is it just the keyword use itself?
Anyway – placing keywords in your URI is important, as we all know, but the keywords in the URI – i.e. the filename of the document – don’t need to be on the page at all.
Does Yahoo count the keyword in the URI? Apparently so….
And so does Bing by the look of it……
Which Is Best For Google – Directories or Files?
Sometimes I use directories and sometimes I use files.
I have not been able to determine if there is any actual benefit to using either above the other.
I prefer files like .html when I am building a new small site from scratch, as they are the ultimate end of the line for search engines as I visualise things – whereas a folder is a collection area, whether you have other files apart from the index or not.
I think it takes a little more to get a subfolder (within a domain) trusted than an individual file, and I guess this sways me to use files on most websites we design. Once a site and its contents (files or subdirectory paths) are trusted, it’s six of one, half a dozen of the other.
Subfolders can be treated differently than files that end in, for instance, .htm, in my experience. Some folders, if you don’t build links to them or incorporate them properly into the site architecture, can be trusted less than other subfolders in your site, or ignored entirely.
Historically subfolders (sub directories) seem to take a little bit longer to get indexed by Google than straight files in some cases.
I have seen entire subdirectories of sites swept out of Google’s listings, usually because of consistent page quality issues.
People talk about trusted domains, but they don’t mention – or don’t know – that not ALL of the domain has the same amount of trust.
Google treats some folders….. differently.
Probably dependent on where links are coming from – and page quality issues, of course, in 2017. Is this folder starved of links when the rest of the site has hundreds?
Google might take a while to get to know it, and trust it. Matt Cutts is now on record about taking action on particular areas of a site, of course, from when I originally wrote this article.
Some say don’t go beyond 4 levels of folders in a URL structure. I haven’t experienced any issues, ever, on that front, but you never know.
Which Is Best? – Absolute Or Relative URLs
My advice to any website designer would be to, above all, keep it consistent.
I prefer absolute URLs. That’s just a preference. Google doesn’t care so neither do I, really. I have just gotten into the habit of using absolute URLs.
- What is an absolute URL? Example – “https://www.hobo-web.co.uk/search-engine-optimisation/”
- What is a relative URL? Example – “/search-engine-optimisation.htm”
Relative just means relative to the document the link is on.
Move that page to another site and it won’t work.
With an absolute URL, it would work.
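The difference is easy to demonstrate with Python’s standard-library `urljoin`, which resolves a link the same way a browser does. The `example.com` domain below is just a stand-in for “another site”:

```python
from urllib.parse import urljoin

page = "https://www.hobo-web.co.uk/blog/some-post/"

# A relative URL is resolved against the page it appears on...
print(urljoin(page, "/search-engine-optimisation.htm"))
# https://www.hobo-web.co.uk/search-engine-optimisation.htm

# ...so the same markup on a different domain points somewhere else entirely.
print(urljoin("https://example.com/blog/some-post/", "/search-engine-optimisation.htm"))
# https://example.com/search-engine-optimisation.htm

# An absolute URL resolves to itself wherever it appears.
print(urljoin(page, "https://www.hobo-web.co.uk/search-engine-optimisation/"))
# https://www.hobo-web.co.uk/search-engine-optimisation/
```

Which is why, if content is ever scraped or moved, absolute URLs keep pointing home – and why consistency matters either way.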
Does Google Remember Every Version Of A URL, Ever?
I don’t see many people talking about what Google remembers. If trust is such a big thing in Google then, just like humans, it needs to:
- find you,
- know you and
- remember your actions,
…to build real trust.
I’ve heard other folk in forums muse a similar thing before – especially about links – but for the first time in a while, using a variation of the site: operator, I saw:
- 4 pages in Google SERPs,
- on one DC,
- all edited at different times,
- all the EXACT SAME URI, and seemingly just
- 4 historic, slightly different versions of the same exact URI.
Is this an indication Google remembers every page and its history?
If Google remembers every single historic version of a URI (or page on your website) – then why? I wonder if it might use such knowledge to work out things to rank you by – like how much Google trusts you, for instance.
If it does have that incredible amount of historical data about your actions, surely somebody at Google is tasked with putting this knowledge to use in some way.
- Can even the most minor changes to a page indicate your intent?
- Can it build a picture over time about what you are up to?
- Can it be a measurement your site can be held accountable for?
- Could it be a metric which might affect page and site rankings over time?
- In a positive, or negative, fashion?
Is, at any level, Google comparing one version of your page with the page you just changed, or started with, or even just the number of times you’ve modified it?
If this was the case, could you even use this information to your benefit? Of course, thought would need to be put into what was an indication of potentially manipulative intent….
- Google, Yahoo & Bing SEEM to consistently rank a page in a search for a keyword that is only present in the filename (URL), and not present on the page itself or anywhere else – and that’s what I am seeing in a lot of places.
I’ve a few more I will publish. The aim of this is geek fun – it’s a search to find out how to make a page AS RELEVANT AS POSSIBLE so Google will rank it well, on a very basic level.
I already know how to make a page relevant, and I don’t sweat the small stuff. But I always like to keep an eye open for little hints like this observation MAY illustrate – because it is not JUST ALL ABOUT INCOMING BACKLINKS for a search engine demanding relevance.
Don’t just take my word for this or the recent meta description observation. See if this is true with your pages.
Of course – you don’t NEED to have a keyword in the URI. But search engines will count them if you do. Perhaps search engines place more emphasis on other regions of a page if keywords are not used in a URI. Who knows….?
Google Promotes Uncool URLs?
Google recently gave more assistance to webmasters, if you can call it that, concerning URL rewriting – changing dynamic, variable-filled URLs into more search engine friendly, more human-readable, static-looking URLs (or URIs).
They could actually be interpreted as recommending that you do not rewrite a website’s URLs, because there is a chance you could screw things up.
They busted some ‘myths’ too:
- Myth: “Dynamic URLs cannot be crawled.” (knew that)
- Myth: “Dynamic URLs are okay if you use fewer than three parameters.” (thought that)
I’ve mentioned before that having a keyword in a URL on its own has a minuscule, if any, effect on the ranking of a page, but it may have some benefits when people use the URL to link to the site (I think it does). Having a keyword in the URL may be a signal of some sort of relevance for an engine in 2010 – see this test – does Google count keywords in the URL?
I do see what Google is doing – they are telling people ‘Google can read dynamic URLs’ – that’s what I will take from the post…. but only the most ignorant SEO doesn’t know that already.
It’s not exactly in line with what the W3C recommends, from what I can determine.
In Cool URIs Don’t Change, they determined an SEF URL was more user-friendly, now and in the long term, for humans. Some may say W3C advice is outdated, or trite, but I still try to follow it where I can. I still believe the best method for constructing URLs is short and to the point – human-readable, preferably. If you go through a site CMS change, you can use rewrites to keep old URLs working.
Of course, sometimes it’s hard to follow even the best advice, but it’s always worth remembering, and trying in the end to achieve usability, accessibility and visibility.
I would still recommend rewriting URLs, despite this post from the Google Webmaster Team. Then again, this advice is more usability orientated than a search engine optimisation benefit.
…and interestingly, the Google Webmaster Blog seems to produce SEF URLs – and it’s worth pointing out, Google is not the only search engine.
Exact Match Domain Names Ranking Benefit
Does having a keyword in the url of your web address improve rankings in Google?
An exact match domain name is a web address with the same EXACT words in it that make up a popular search – like “bingo.com” or “bingo.co.uk”.
The answer is yes it can help.
But it can also get penalised a lot faster, in 2017, too.
An exact match domain name with low-quality spammy links and keyword-stuffed text USED to outrank a real site with thousands of natural links for that term.
For a long time.
But things have changed a little over the past few years.
Having a low-quality long tail keyword variation exact match domain is no substitute for having a brand in my opinion, and never was.
An exact match domain might help you rank for that term, but if you want thousands of visitors a day, you need a breadth of keywords to keep a business running, not just one long-tail key phrase and a few variants.
Do exact match domains still work in 2017?
Microsites & Minisites Can Still Rank High in Google, Bing & Yahoo
NOTE – Google actively hunts out low-quality domains and does a slightly better job of keeping low-quality exact match minisites out of its results, but they’re clearly still used successfully in some verticals.
Microsites & Minisites – A good idea, SOME of the time!
Vanessa Fox mentioned that building microsites (smaller sites based around one keyword) is a bad idea most of the time.
However, the opportunities Google, and all the major search engines, afford exact match domains make it silly to discount them as part of a campaign if you are willing to put the extra time and resources into making a good microsite.
Microsites still rank (albeit for long-tail terms)
I do not use microsites like these for link-building – they are all independent entities.
Some low-quality SEO companies use microsites as a low-quality link building strategy back to a main domain – that stuff gets you penalised, even for your brand name! Believe me.
The ROI depends on how many resources you have to pump these out and manage them, and how much healthy competition there is in the niche.
I ALWAYS recommend one main brand domain – I’d never recommend (to most of the types of clients I deal with), using a generic keyword domain if you want to build a brand.
To give you an example: if I had ONE store selling 20 products to spend ALL my time on, I might focus on one main branded site, and 20 microsites around each product, each on an exact match domain.
I’d then promote the s*&^ out of them in multiple channels.
Then again, I would have the resources to manage all that of course.
I don’t use microsites to sell SEO services, for instance, – but many do.
As Google relies more and more on domain authority, it’s got to keep letting the little guy in *somewhere*, and they take the easy way out by letting exact match domains in.
I think quality niche sites are going to be even MORE rewarded in the years to come.
Exact match domains are AMAZING for products and some services – so they can be a good idea some of the time – they are just less effective for companies and brand building.
Vanessa knows her stuff – but Microsites, when operated intelligently, can be very valuable indeed.
Horses for courses. As usual.