After seeing many posts on blogs and statements on SEO company websites about the role W3C compliance (i.e. valid HTML and CSS) plays when Google ranks your site in its search engine results pages, we thought we'd run a series of tests to examine what role well-formed pages (or pages with invalid code) actually play in Google today.
We also thought: “how many other seo companies have actually tested this for themselves?”
We’re sure this won’t be the final test, and that each test will improve over time. Please get in touch if you think you know better and can contribute to the test.
SEO Mythbuster Test #1
“Can W3C compliance and accessibility impact your Search Engine Optimization?
…..the answer is definitely maybe. Ok, well that wasn’t much help, but it gives me the opportunity to go into a little more depth. From my experience, having a site that is 100% code compliant doesn’t give you any SEO benefit. That said, throwing up a page with complete disregard for valid code is looking for trouble….. Depending on what your errors are, you may have made it harder for a bot to crawl your website. However, if you can get it down to a handful of errors, it might not be worth the time obsessing over those last few details….”
I agree. I agree that nobody knows. And what’s more, no one can know. And furthermore, if Google did apply, for instance, a “W3C Validation Check” filter to results pages, it’s only another filter or switch that could one day be turned up, turned down, or dialled out completely, as happens from day to day with every other influence on SERPs at the Googleplex.
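Full W3C validation needs the W3C validator itself, but the point in the quote above – that badly broken markup can make a page harder for a bot to parse – can be illustrated with a toy check. Here's a minimal, hypothetical sketch in Python (the class and helper names are our own invention, not any real validator) that only tests whether tags open and close in a balanced way, one small slice of what real validation covers:

```python
from html.parser import HTMLParser

# Void elements never take a closing tag, so they are not pushed on the stack.
VOID = {"area", "base", "br", "col", "embed", "hr", "img", "input",
        "link", "meta", "source", "track", "wbr"}


class TagBalanceChecker(HTMLParser):
    """Crude check that every opened tag is closed in order.

    This is nothing like full W3C validation - it knows nothing about
    attributes, the doctype, CSS, or which elements may nest where.
    """

    def __init__(self):
        super().__init__()
        self.stack = []    # currently open tags
        self.errors = []   # human-readable problems found

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

    def handle_startendtag(self, tag, attrs):
        pass  # self-closing tags like <br/> are balanced by definition

    def close(self):
        super().close()
        # Anything still open at end-of-document was never closed.
        self.errors.extend(f"unclosed <{t}>" for t in self.stack)


def is_balanced(html):
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.close()
    return not checker.errors


print(is_balanced("<html><body><p>ok</p></body></html>"))   # True
print(is_balanced("<html><body><p>broken</body></html>"))   # False
```

A real check against the actual W3C rules should of course go through the W3C validator service rather than anything homemade like this.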
So here’s the first SEO Mythbuster Test:
- 4 Pages on the hobo site
- New but Duplicate Content of Course
- All html pages
- All page names are garbled letters, to ensure no preference in Google. Will Google read the pages in alphabetical order, by date created, etc.? Who knows – maybe we’ll find that out too.
- 1 Valid Html + CSS
- 1 Valid HTML + Invalid CSS
- 1 Invalid HTML + Valid CSS
- 1 Invalid HTML + Invalid CSS
- Anchor text is the same for every link, so as not to influence Google (of course, using the same text phrase for links to different pages isn’t good for accessibility)
- As far as possible we’ve duplicated the pages, including page titles, meta descriptions etc. What this should also test is whether Google, today at least, will index at least one page from this duplicate content, or ignore them all.
- We assume everything is equal apart from validation – that includes the duplicate keyword descriptions, keywords and title elements etc.
- In this first test we’ll put the pages in a folder. This may make everything go supplemental, but we can wait and see ;)
So let’s see which page Google likes best from this simple test. Google should follow the links below, read the pages, apply duplicate content filters to them, and pick one of the pages for its index, right? Which one will it choose?
- link / link / link / link
Feel free to add a comment, theory or improvement about the test (we knocked it together quickly), or link to your own SEO myth buster test (we dofollow :) )…. And remember, this test (like any test) can be manipulated, but for most webmasters, actually seeing what happens will be of far greater value than screwing with it (we’re carrying out the exact same test with different content on a private site as well as the Hobo one, just in case!).
Hopefully it’ll only take a couple of days for Google to read the pages, so any heads-up about obvious errors or failings in our test would be appreciated.
At the moment, we at Hobo think W3C compliance is the right way to go, for visitor satisfaction at least. We’ll monitor the results and post as soon as this simple test runs its course (so remember to subscribe). Hopefully we’ll have at least a qualitative result on whether good W3C code is a factor in search engine optimisation (optimization) for the search engines.
Hey – here are the results