72% Were Hit By Google’s Penguin 1.0 Or 2.0 Algorithm
Google’s 2.0 Penguin update went live over 3 months ago and we ran a poll shortly after asking how many of you were impacted by this Penguin update.
We had almost 1…
Video: Most Common Google AdWords Violations
Google’s Courtney Pannell posted a video in the Google AdWords Help forums of a hangout between three Googlers…
Poll: Does A Manual Action Removal Impact Google Rankings?
I see many threads with complaints, excitement, sadness and fear in the forums. Most have to do with Google rankings. It goes like this… The webmaster says something like, “my rankings and traffic from Google tanked,” now I look and I have…
Bing Deep Links Directly In Search Box
Bing is apparently testing showing deep links, what Google calls sitelinks, directly in the Bing search box, without going to the search results page.
Aaron Wall spotted this and posted a picture on Twitter…
Mastering PPC: Five Reasons To Run PPC For Brand Keywords
Amanda wrote a great post last week on advanced bidding strategies. I wanted to follow up with an answer to a common client question about bidding. Clients ask all the time, why should they bid on their own brand keywords when they rank highly in the organic results for their brand keywords? Isn’t this wasting […]
Live @ SMX East: Leveraging Big Data In Search Marketing
Big data. You’ve undoubtedly heard this often-used phrase, generally paired with grandiose visions of how mining big data’s mother lodes holds the key to tackling big problems – solving climate change, realizing world peace, ending disease and suffering… you’ve heard…
International SEO: Top-Level Domains, Subdomains & Subdirectories
When getting started in international SEO, one of the most important parts of an optimal strategy is properly structuring your domain name and URLs to signify the country the website is targeting and/or the language the website is in.
‘Grand Theft Auto V’ Videos Stealing the Most Buzz Among Gamers [Study]
A new report looks at some key battles in video games, smartphones, tablets, and web browsers in the run-up to the holiday season, identifying which brands’ commercials are attracting the most attention online and which desperately need an upgrade.
Comparing Rank-Tracking Methods: Browser vs. Crawler vs. Webmaster Tools
Posted by Dr-Pete
Deep down, we all have the uncomfortable feeling that rank-tracking is unreliable at best, and possibly outright misleading. Then, we walk into our boss’s office, pick up the phone, or open our email, and hear the same question: “Why aren’t we #1 yet?!” Like it or not, rank-tracking is still a fact of life for most SEOs, and ranking will be a useful signal and diagnostic for when things go very wrong (or very right) for the foreseeable future.
Unfortunately, there are many ways to run a search, and once you factor in localization, personalization, data centers, data removal (such as [not provided]), and transparency (or the lack thereof), it’s hard to know how any keyword really ranks. This post is an attempt to compare four common rank-tracking methods:
- Browser – Personalized
- Browser – Incognito
- Crawler
- Google Webmaster Tools (GWT)
I’m going to do my best to keep this information unbiased and even academic in tone. Moz builds rank-tracking tools based in part on crawled data, so it would be a lie to say that we have no skin in the game. On the other hand, our main goal is to find and present the most reliable data for our customers. I will do my best to present the details of our methodology and data, and let you decide for yourselves.
Methodology
We started by collecting a set of 500 queries from Moz.com’s Google Webmaster Tools (GWT) data for the month of July 2013. We took the top 500 queries for that time period by impression count, which provided a decent range of rankings and click-through rates. We used GWT data because it’s the most constrained rank-tracking method on our list – in other words, we needed keywords that were likely to pop up on GWT when we did our final data collection.
On August 7th, we tracked these 500 queries using four methods:
(1) Browser – Personalized
This is the old-fashioned approach. I personally entered the queries on Google.com via the Chrome browser (v29) and logged into my own account.
(2) Browser – Incognito
Again, using Google.com on Chrome, I ran the queries manually. This time, though, I was fully logged out and used Chrome’s incognito mode. While this method isn’t perfect, it seems to remove many forms of personalization.
(3) Crawler
We modified part of the MozCast engine to crawl each of the 500 queries and parse the results. Crawls occurred across a range of IP addresses (and C-blocks), selected randomly. The crawler did not emulate cookies or any kind of login, and we added the personalization parameter (“&pws=0”) to remove other forms of personalization. The crawler also used the “&near=us” option to remove some forms of localization. We crawled up to five pages of Google results, which produced data for all but 12 of the 500 queries (since these were queries for which we knew Moz.com had recently ranked).
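As an illustration only, here’s a rough sketch of how a depersonalized query URL might be assembled. The “&pws=0” and “&near=us” parameters come from the description above, but the function name and pagination handling are hypothetical (MozCast’s actual crawler is not public):

```python
from urllib.parse import urlencode

def serp_url(query, page=0):
    # Sketch only: builds a Google results URL using the depersonalization
    # parameters described in the post; not MozCast's real implementation.
    params = {
        "q": query,
        "pws": 0,            # "&pws=0" removes personalization
        "near": "us",        # "&near=us" removes some localization
        "start": page * 10,  # pages 0-4 cover the "up to five pages" crawled
    }
    return "https://www.google.com/search?" + urlencode(params)

print(serp_url("seo checklist", page=2))
# https://www.google.com/search?q=seo+checklist&pws=0&near=us&start=20
```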
(4) Google Webmaster Tools
After Google made data available for August 7th, we exported average position data from GWT (via “Search Traffic” > “Search Queries”) for that day, filtering to just “Web” and “United States”, since those were the parameters of the other methods. While the other methods represent a single data point, GWT “Avg. position” theoretically represents multiple data points. Unfortunately, there is very little transparency about precisely how this data is measured.
Once the GWT data was exported and compared to the full list, there were 206 queries left with data from all four rank-tracking methods. All but a handful of the dropped keywords were due to missing data in GWT’s one-day report. Our analyses were conducted on this set of 206 queries with full data.
Results: Correlations
To compare the four ranking methods, we started with the pair-wise Spearman rank-order correlations (hat tip to my colleague, Dr. Matt Peters, for his assistance on this and the following analysis). All correlations were significant at the p<0.01* level, and r-values are shown in the table below:
*Given that the ranking methods are analogous to a repeated analysis of the same data set, we applied the Bonferroni correction to all p-values.
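For readers who want to see the mechanics, here’s a minimal sketch of this analysis. The ranks dict holds hypothetical stand-in data (the real study used 206 keywords per method), and the Bonferroni correction is applied by tightening the significance threshold:

```python
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical stand-in data: one ranking position per keyword, per method.
ranks = {
    "personalized": [3, 7, 1, 12, 4, 1, 9, 2],
    "incognito":    [4, 7, 1, 11, 5, 1, 8, 2],
    "crawler":      [4, 6, 1, 14, 5, 2, 8, 3],
    "gwt":          [3.9, 6.7, 1.0, 13.0, 5.2, 1.8, 7.5, 2.6],
}

pairs = list(combinations(ranks, 2))  # 6 pair-wise comparisons
alpha = 0.01 / len(pairs)             # Bonferroni: tighten the p<0.01 threshold

for a, b in pairs:
    r, p = spearmanr(ranks[a], ranks[b])
    print(f"{a} vs {b}: r={r:.3f}, p={p:.4f}, significant: {p < alpha}")
```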
Interestingly, almost all of the methods showed very strong agreement, with Personalized vs. Incognito showing the most agreement (not surprisingly, as both are browser-based). Here’s a scatterplot of that data, plotted on log-log axes (done only for visualization’s sake, since the rankings were grouped pretty tightly at the upper spots):
Crawler vs. GWT had the lowest correlation, but it’s important to note that none of these differences were large enough to make a strong distinction between them. Here’s the scatterplot of that correlation, which is still very high/positive by most reasonable standards:
Since the GWT “Average� data is precise to one decimal point, there’s more variation in the Y-values, but the linear relationship remains very clear. Many of the keywords in this data set had #1 rankings in GWT, which certainly helped boost the correlations, but the differences in the methods appear to be surprisingly low.
If you’re new to correlation and r-values, check out my quick refresher: the correlation “mathographic”. The statement “p<0.01” means that there is less than a 1% probability that these r-values were the result of random chance. In other words, we can be 99% sure that there was some correlation in play (and it wasn’t zero). This doesn’t tell us how meaningful the correlation is. In this particular case, we’re just comparing sets of data to see how similar they are – we’re not making any statements about causation.
Results: Agreement
One problem with the pair-wise correlations is that we can only compare any one method to another. In addition, there’s a certain amount of dependence between the methods, so it’s hard to determine what a “strong” correlation is. During a smaller, pilot study, we decided that what we’re really interested in is how any given method compares to the totality of the other three methods. In other words, which method agrees or disagrees the most with the rest of the methods?
With the help of Dr. Peters, I created a metric of agreement (or, more accurately, disagreement). I’ll save the full details for Appendix A at the end of this article, but here’s a short version. Let’s say that the four methods return the following rankings (keeping in mind that GWT is an average):
- 2
- 1
- 1
- 2.8
Our disagreement metric produces the following values for each of the methods:
- 2.89
- 2.34
- 2.34
- 3.58
Since the two #1 rankings show the most agreement, methods (2) and (3) have the same score, with method (1) showing more disagreement and (4) showing the most disagreement. The greater the distance between the rankings, the higher the disagreement score, but any rankings that match will have the same score for any given keyword.
This yielded a disagreement score for each of the four methods for each of the 206 queries. We then took the mean disagreement score for each method, and got the following results:
- Personal = 1.12
- Incognito = 0.82
- Crawler = 0.98
- GWT = 1.26
GWT showed the highest average disagreement from the other methods, with incognito rankings coming in on the low end. On the surface, this suggests that, across the entire set of methods, GWT disagreed with the other three methods the most often.
Given that we’ve invented this disagreement metric, though, it’s important to ask if this difference is statistically significant. This data proved not to be normally distributed (a chunk of disagreement=0 data points skewed it to one side), so we decided our best bet for comparison was the non-parametric Mann-Whitney U Test.
Comparing the disagreement data for each pair of methods, the only difference that approached statistical significance was Incognito vs. GWT (p=0.022). Since I generally try to keep the bar high (p<0.01), I have to play by my own rules and say that the disagreement scores were too close to call. Our data cannot reliably tell the levels of disagreement apart at this point.
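As a sketch of that test (the score lists below are hypothetical stand-ins; the real comparison used the 206 per-keyword disagreement scores for each method):

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-keyword disagreement scores (the real set had 206 per method).
incognito_scores = [0.0, 1.0, 0.5, 2.3, 0.0, 1.4, 0.8, 0.0]
gwt_scores       = [1.2, 2.9, 0.0, 3.6, 1.8, 0.9, 2.4, 1.1]

# Non-parametric test, since the disagreement data weren't normally distributed.
u, p = mannwhitneyu(incognito_scores, gwt_scores, alternative="two-sided")
print(f"U={u}, p={p:.3f}")  # the post reports p=0.022 for Incognito vs. GWT
```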
Results: Outliers
Even if the statistics told us that one method clearly disagreed more than the other methods, it still wouldn’t answer one very important question – which method is right? Is it possible, for example, that Google Webmaster Tools could disagree with all of the other methods, and still be the correct one? Yes, it’s within the realm of possibility.
No statistic will tell us which method is correct if we fundamentally distrust all of the methods (and I do, at least to a point), so our next best bet is to dig into some of the specific cases of disagreement and try to sort out what’s happening. Let’s look at a few cases of large-scale disagreement, trying not to bias toward any particular method.
Case 1 – Personalization Boost
Many of the cases where personalization disagreed are what you’d expect – Moz.com was boosted in my personalized results. For example, a search for “seo checklist” had Moz.com at #3 in my logged-in results, but #7 for both incognito and crawled, and an average of 6.7 for GWT (which is consistent with the #7 ballpark). Even by just clicking personalization off, Moz.com dropped to #4, and in a logged-out browser a few days after the original data collection, it was at #5.
What’s fascinating to me is that personalization didn’t disagree even more often. Consider that all of these queries were searches that generated traffic for Moz.com and I’m on the site every day and very active in the SEO community. If personalization has the impact we seem to believe it has, I would theorize that personalized searches would disagree the most with other methods. It’s interesting that that wasn’t the case. While personalization can have a huge impact on some queries, the number of searches it affects still seems to be limited.
Case 2 – Personalization Penalty
In some cases, personalization actually produced lower rankings. For example, a search for “what is an analyst” showed Moz.com at the #12 position for both personalized and incognito searches. Meanwhile, crawled rankings put us at #3, and GWT’s average ranking was #5. Checking back (semi-manually), I now see us at #10 on personalized search and up to #2 for crawled rankings.
Why would this happen? Both searches (personalized vs. crawled) show a definition box for “analyst” at the top, which could indicate some kind of re-ranking in play, but the top 10 after that box differ by quite a bit. One would naturally assume that Moz.com would get a boost in any of my personalized searches, but that’s simply not the case. The situation is much more complex and real-time than we generally believe.
Case 3 – GWT (Ok, Google) Hates Us
Here’s one where GWT seems to be out of whack. In our one-day data collection, a search for “seo” showed Moz at #3 for personalized rankings and #4 for incognito and crawled. Meanwhile, GWT had us down in the #6 spot. It’s not a massive difference, but for such an important head keyword, it definitely could lead to some soul-searching.
As of this writing, I was showing Moz.com in the #4 spot, so I called in some help via social media. I asked people to do a logged-in (personalized) search for “seo” and report back where they found Moz.com. I removed data from non-US participants, which left 63 rankings (36 from Twitter, and 27 from Facebook). The reported rankings ranged from #3 to #8, with an average of 4.11. These rankings were reported from across the US, and only two participants reported rankings at #6 or below. Here’s the breakdown of the raw data:
You can see the clear bias toward the #4 position across the social data. You could argue that, since many of my friends are SEOs, we all have similarly biased rankings, but this quickly leads to speculation. Saying that GWT numbers don’t match because of personalization is a bit like saying that the universe must be made of dark matter just because the numbers don’t add up without it. In the end, that may be true, but we still need the evidence.
Face Validity
Ultimately, this is my concern – when GWT’s numbers disagree, we’re left with an argument that basically boils down to “Just trust us.” This is difficult for many SEOs, given what feels like a concerted effort by Google to remove critical data from our view. On the one hand, we know that personalization, localization, etc. can skew our individual viewpoints (and that browser-based rankings are unreliable). On the other hand, if 56 out of 63 people (89%) all see my site at #3 or #4 for a critical head term and Google says the “average” is #6, that’s a hard pill to swallow with no transparency around where Google’s number is coming from.
In measurement, we call this “face validity”. If something doesn’t look right on the surface, we generally want more proof to sort out why, and that’s usually a reasonable instinct. Ultimately, Google’s numbers may be correct – it’s hard to prove they’re not. The problem is that we know almost nothing about how they’re measured. How does Google count local and vertical results, for example? What/who are they averaging? Is this a sample, and if so, how big of a sample and how representative? Is data from [not provided] keywords included in the mix?
Without these answers, we tend to trust what we can see, and while we may be wrong, it’s hard to argue that we shouldn’t. What’s more, it’s nearly impossible to convince our clients and bosses to trust a number they can’t see, right or wrong.
Conclusions
The “good” news, if we’re being optimistic, is that the four methods we considered in this study (Personalized, Incognito, Crawler, and GWT) really didn’t differ that much from each other. They all have their potential faults, but in most cases they’ll give you an answer that’s in the ballpark of reality. If you focus on relative change over time and not absolute numbers, then all four methods have some value, as long as you’re consistent.
Over time, this situation may change. Even now, none of these methods measure anything beyond core organic ranking. They don’t incorporate local results, they don’t indicate if there are prominent SERP features (like Answer Boxes or Knowledge Graph entries), they don’t tell us anything about click-through or traffic, and they all suffer from the little white lie of assumed linearity. In other words, we draw #1 – #10, etc. on a straight line, even though we know that click-through and impact drop dramatically after the first couple of ranking positions.
In the end, we need to broaden our view of rankings and visibility, regardless of which measurement method we use, and we need to keep our eyes open. In the meantime, the method itself probably isn’t critically important for most keywords, as long as we’re consistent and transparent about the limitations. When in doubt, consider getting data from multiple sources, and don’t put too much faith in any one number.
Appendix A: Measuring Disagreement
During a pilot study, we realized that, in addition to pair-wise comparisons of any two methods, what we really wanted to know was how any one method compared to the rest of the methods. In other words, which methods agreed (or disagreed) the most with the set of methods as a whole? We invented a fairly simple metric based on the sum of the differences between each of the methods. Let’s take the example from the post – here, the four methods returned the following rankings (for Keyword X):
- 2
- 1
- 1
- 2.8
We wanted to reward methods (2) and (3) for being the most similar (it doesn’t matter that they showed Keyword X in the #1 position, just that they agreed), and slightly penalize (1) and (4) for mismatching. After testing a few options, we settled (I say “we”, but I take full blame for this particular nonsense) on calculating the sum of the square roots of the absolute differences between each method and the other three methods.
That sounds a lot more complicated than it actually is. Let’s calculate the disagreement score for method 1, which we’ll call “M1” (likewise, we’ll call the other methods M2, M3, and M4). I call it a “disagreement” score because larger values ended up representing lower agreement. For M1 for Keyword X, the disagreement score is calculated by:
sqrt(abs(M1-M2)) + sqrt(abs(M1-M3)) + sqrt(abs(M1-M4))
The absolute value is used because we don’t care about the direction of the difference, and the square root is essentially a dampening function. I didn’t want outliers to be overemphasized, or one bad data point for one method could potentially skew the results. For Method 1 (M1), then, the disagreement value is:
sqrt(abs(2-1)) + sqrt(abs(2-1)) + sqrt(abs(2-2.8))
…which works out to 2.89. Here are the values for all four methods:
- 2.89
- 2.34
- 2.34
- 3.58
Let’s look at a couple more examples, just so that you don’t have to take my word for how this works. In this second case, two methods still agree, but the ranking positions are “lower” (which equates to larger numbers), as follows:
- 12
- 12
- 3
- 5
The disagreement metric yields the following values:
- 5.65
- 5.65
- 7.41
- 6.71
M1 and M2 are in agreement, so they have the same disagreement value, but all four values are elevated a bit to show that the overall distance across the four methods is fairly large. Finally, here’s an example where two methods each agree with one other method:
- 2
- 2
- 5
- 5
In this case, all four methods have the same disagreement score:
- 3.46
- 3.46
- 3.46
- 3.46
Again, we don’t care very much that two methods ranked Keyword X at #2 and two at #5 – we only care that each method agreed with one other method. So, in this case, all four methods are equally in agreement, when you consider the entire set of rank-tracking methods. If the difference between the two pairs of methods was larger, the disagreement score would increase, but all four methods would still share that score.
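For anyone who wants to verify the arithmetic, here’s a minimal sketch of the disagreement metric in code, assuming only the sum-of-square-roots definition above; it reproduces all three worked examples:

```python
from math import sqrt

def disagreement(rankings):
    # For each method's rank, sum sqrt(|rank difference|) vs. each other method.
    return [
        sum(sqrt(abs(r - other)) for j, other in enumerate(rankings) if j != i)
        for i, r in enumerate(rankings)
    ]

print([round(d, 2) for d in disagreement([2, 1, 1, 2.8])])  # [2.89, 2.34, 2.34, 3.58]
print([round(d, 2) for d in disagreement([12, 12, 3, 5])])  # [5.65, 5.65, 7.41, 6.71]
print([round(d, 2) for d in disagreement([2, 2, 5, 5])])    # [3.46, 3.46, 3.46, 3.46]
```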
Finally, for each method, we took the mean disagreement score across the 206 keywords with full ranking data. This yielded a disagreement measurement for each method. Again, these measurements turned out not to differ by a statistically significant margin, but I’ve presented the details here for transparency and, hopefully, for refinement and replication by other people down the road.
SearchCap: The Day In Search, September 4, 2013
Below is what happened in search today, as reported on Search Engine Land and from other places across the web. From Search Engine Land: Google May Be Updating Their Algorithm But They Won’t Confirm It Today, September 4th, and last week, August 21st, there were huge spikes in chatter amongst…
Google Names Next Android Operating System KitKat
The Android 4.4 operating system will be called KitKat. A deal with Nestlé, which owns the Kit Kat name, came about after a decision by Google to take a break from the norm and name its latest OS after a candy bar. No money changed hands.
Brilliant example of storytelling…without even talking about the product
OK, gotta hand it to the wonks at the agency LG hired for this campaign. It’s epic.
Dmoz and Yellowpages as Examples of Unnatural Links in GWT Request Replies
@Marie_Haynes is doing a good job monitoring “an example unnatural link”.
Topsy Has Indexed Your Tweets – every single one of them…ever.
Topsy, a social search engine, states they’ve managed to index every tweet ever made…
Google May Be Updating Their Algorithm But They Won’t Confirm It
Today, September 4th, and last week, August 21st, there were huge spikes in chatter among SEOs and webmasters about the Google search results shifting and changing. In short, many SEOs and webmasters were complaining that their rankings in the Goog…
Jim Boykin Interview
Jim Boykin has been a longtime friend & was one of the early SEOs who was ahead of the game back in the day. While many people have come and gone, Jim remains as relevant as ever today. We interviewed him about SEO, including scaling his company, disavows & how Google has changed the landscape over the past couple of years.
Aaron: How did you get into the field of SEO?
Jim: In 1999 I started We Build Pages as a one man show designing and marketing websites…I never really became much of a designer, but luckily I had much more success in the marketing side. Somehow that little one man show grew to about 100 ninjas, and includes some communities and forums I grew up on (WebmasterWorld, SEOChat, Cre8asiteForums), and I get to work with people like Kris Jones, Ann Smarty, Chris Boggs, Joe Hall, Kim Krause Berg, and so many others at Ninjas who aren’t as famous but are just as valuable to me, and Ninjas has really become a family over the years. I still wonder at times how this all happened, but I feel lucky with where we’re at.
Aaron: When I got started in SEO some folks considered all link building to be spam. I looked at what worked, and it appeared to be link building. Whenever I thought I came up with a new clever way to hunt for links & hunted around, most of the time it seemed you had gotten there first. Who were some of the people you looked to for ideas when you first got into SEO?
Jim: Well, I remember going to my first SEO conference in 2002 and meeting people like Danny Sullivan, Jill Whalen, and Bruce Clay. I also remember Bob Massa being the first person “dinged” by Google for selling links…that was back in 2002 I think…I grew up on WebmasterWorld and I learned a ton from the people in there like: Tedster, Todd Friesen, Greg Boser, Brett Tabke, Shak, Bill, Rae Hoffman, Roger Montti, and so many others in there over the years…they were some of my first influencers….I also used to hang around with Morgan Carey and Patrick Gavin a lot too. Then this guy selling an SEO Book kept showing up on all my high PR pages where I was getting my links….hehe…
Aaron: One of the phrases in search that engineers may use is “in an ideal world…”. There is always some amount of gap between what is advocated & what actually works. With all the algorithmic changes that have happened in the past few years, how would you describe that “gap” between what works & what is advocated?
Jim: I feel there’s really been a tipping point with the Google Penguin updates. Maybe it should be “What works best short term” and “What works best long term”….anything that is not natural may work great in the short term, but your odds of getting zinged by Google go way up. If you’re doing “natural things” to get citations and links, then it may tend to take a bit longer to see results (in conjunction with all you’re doing), but at least you can sleep at night doing natural things (and not worrying about Google penalties). It’s not like years ago, when getting exact targeted anchor text for the phrases you want to rank on was the way to go if you wanted to compete for search rankings. Today it’s much more involved to send natural signals to a client’s website. To send in natural signals you must do things like work up the brand signals, trusted citations, return visitors, good user experience, community, authors, social, yada yada….SEO is becoming less a “link thing”…and more about “great signals from many trusted people”, and it’s a branding game now. I really like how SEO is evolving….for years Google used to say things like “Think of the users” when talking about the algorithm, but we all laughed and said “Yea, yea, we all know that it’s all about the backlinks”….but today, I think Google has crossed a tipping point where yes, to do great SEO, you must focus on the users, and not the links….the best SEO is getting more citations and trusted signals to your site than your competitors…and there’s a lot of trusted signals which we, as internet marketers, can be working on….it’s more complicated, and some SEOs won’t survive this game…they’ll continue to aim for short term gains on short tail keyword phrases…and they’ll do things in bulk….and their network will be filtered, and possibly penalized.
Every website owner has to measure the risks, and the time involved, and the expected ROI….it’s not a cheap game any more….doing real marketing involves brains and not buttons…if you can’t invest in really building something “special” (ideally many special things), on your site to get signals (links/social), then you’re going to find it pretty hard to get links that look natural and don’t run a risk of getting penalized. The SEO game has really matured, the other option is to take a high risk of penalization.
Aaron: In terms of disavow, how deep does one have to cut there?
Jim: as deep as it needs to be to remove every unnatural link. If you have 1000 backlinks and 900 are on pages that were created for “unnatural purposes” (to give links), then all 900 have to be disavowed…if you have 1000 backlinks, and only 100 are not “natural”, then only 100 need to be disavowed… what percent has to be disavowed to untrip an algorithmic filter? I’m not sure…but almost always the links which I disavow have zero value (in my opinion) anyways. Rip the band-aid off, get over it, take your marketing department and start doing real things to attract attention, and to keep it.
Aaron: In terms of recoveries, are most penalized sites “recoverable”? What does the typical recovery period look like in terms of duration & restoration?
Jim: oh…this is a bee’s nest you’re asking me….. are sites recoverable….yes, most….if a site has 1000 domains that link to it, and 900 of those are artificial and I disavow them, there might not be much of a recovery depending on what those 100 remaining links are….ie, if I disavow all link text of “green widgets” that goes to your site, and you used to rank #1 for “green widgets” prior to being hit by a Penguin update, then I wouldn’t expect to “recover” on the first page for that phrase….. where you recover seems to depend on “what do you have for natural links that are left after the disavow?”….the time period….well…. we’ve seen some partial recoveries as soon as 1 month after the disavow, and some 3 months after…and some we’re still waiting on….
To explain, Google says that when you add links to the disavow document, the way it works is that the next time Google crawls any page that links to you, they will assign a “nofollow” to the link at that time…..so you have to wait until enough of the links have been recrawled, and assigned the nofollow, to untrip the filter….but one of the big problems I see is that many of the pages Google shows as linking to you, well, they’re not cached in Google!….I see some really spammy pages where Google was there (they recorded your link), but it’s like Google has tossed the page out of the index even though they show the page as linking to you…so I have to ask myself, when will Google return to those pages?…will Google ever return to those pages??? It looks like if you had a ton of backlinks on pages that were so bad in the eyes of Google that they don’t even show those pages in their index anymore…we might be waiting a long long time for Google to return to those pages to crawl them again….unless you do something to get Google to go back to those pages sooner (I won’t elaborate on that one).
Aaron: I notice you launched a link disavow tool & earlier tonight you were showing me a few other cool private tools you have for working on disavow analysis, are you going to make any of those other tools live to the public?
Jim: Well, we have about 12 internal private disavow analysis tools, and only 1 public disavow tool….we are looking to have a few more public tools for analyzing links for disavow analysis in the coming weeks, and in a few months we’ll release our Ultimate Disavow Tool…but for the moment, they’re not ready for the public, some of those are fairly expensive to run and very database intensive…but I’m pretty sure I’m looking at more link patterns than anyone else in the world when I’m analyzing backlinks for doing disavows. When I’m tired of doing disavows maybe I’ll sell access to some of these.
Aaron: Do you see Google folding in the aggregate disavow data at some point? How might they use it?
Jim: um…..I guess if 50,000 disavow documents have spammywebsite.com listed in their disavows, then Google could consider that spammywebsite.com might be a spammy website…..but then again, with people disavowing links who don’t know what they’re doing, I’m sure there’s a ton of great sites getting listed in disavow documents in Webmaster Tools.
Aaron: When approaching link building after recovering from a penalty, how does the approach differ from link building for a site that has never been penalized?
Jim: it doesn’t really matter….unless you were getting unnatural/artificial links or things in bulk in the past, then, yes, you have to stop doing that now…that game is over if you’ve been hit…that game is over even if you haven’t been hit….Stop doing the artificial link building stuff. Get real citations from real people (and often “by accident”) and you should be ok.
Aaron: You mentioned “natural” links. Recently Google has hinted that infographics, press releases & other sorts of links should use nofollow by default. Does Google aim to take some “natural” link sources off the table after they are widely used? Or are those links they never really wanted to count anyhow (and perhaps sometimes didn’t) & they are just now reflecting that.
Jim: I think ~most of these didn’t count for years anyways….but it’s been impossible for Google to nail every directory, or every article syndication site, or every press release site, or everything that people can do in bulk…and it’s harder to get all occurrences of widgets and mentions of infographics…so it’s probably just a “Google scare”….ie, Google says, “Don’t do it, nofollow them” (and I think they say that because it often works), and the less of a pattern there is, the harder it is for Google to catch it (ie, widgets and infographics)…I think too much of any 1 thing (be it a “type of link”) can be a bad signal….as well as things like “too many links from pages that get no traffic”, or “no clicks from links to your site”. In most cases, because of keyword abuse, Google doesn’t want to count them…links like this may be fine (and ok to follow) in moderation…but if you have 1000 widget links, and they all have commercial keywords as link text, then you’re treading on what could certainly turn into a negative signal, and so you might want to consider nofollowing those.
Aaron: There is a bit of a paradox in terms of scaling effective quality SEO services for clients while doing things that are not seen as scalable (and thus future friendly & effective). Can you discuss some of the biggest challenges you faced when scaling IMN? How were you able to scale to your current size without watering things down the way that most larger SEO companies do?
Jim: Scaling while keeping quality has certainly been a challenge in the past. I know that scaling content was an issue for us for a while….how can you scale quality content?….Well, we’ve found that by connecting real people, the real writers, the people with real social influence…and by taking these people and connecting them to the brands we work with…..so these real people then become “brand evangelists”…and getting these real people who know what they’re talking about to write for our clients, well, when we did that we found that we could scale the content issue. We can scale things like link building by merging it with the other “mentions”, and specifically targeting industries and people and working on building up associations and relations with others has helped us scale…plus we’re always building tools to help us scale while keeping quality. It’s always a challenge, but we’ve been pretty good at solving many of those issues.
I think we’ve been really good at scaling in house….many content marketers are now more like community managers and content managers….we’ve been close to 100 employees for a few years now…so it’s more about how we can do more with the existing people we have…and we’ve been able to do that by connecting real people to the clients so we can actually have better content and better marketing around that content….I’m really happy that the # of employees has been roughly the same for the past few years, but we’re doing more business, and the quality keeps getting better….there’s not as many content marketers today as there were a few years ago, but there’s many more people working on helping authors build up their authorship value and produce more “great marketing” campaigns where, as a by-product, we happen to get some links and social citations.
Aaron: One of the things I noticed with your site over the past couple years is the sales copy has promoted the fusion of branding and SEO. I looked at your old site in Archive.org over the years & have seen quite an amazing shift in terms of sales approach. Has Google squeezed out most of the smaller players for good & does effective sustainable SEO typically require working for larger trusted entities? When I first got into SEO about 80%+ of the hands in the audiences at conferences were smaller independent players. At the last conference I was at, it seemed that about 80% of the hands in the audience worked for big companies (or provided services to big companies). Is this shift in the market irreversible? How would you compare/contrast your approach in working with smaller & larger clients?
Jim: Today it’s down to “Who really can afford to invest in their brand?” and “Who can do real things to get real citations from the web?”….and who can think way beyond “links”…if you can’t do those things, then you can’t have an effective sustainable online marketing program…. we were a “link building company” for many, many years…. but for the past 3 years we’ve moved into full service, offering way more than “link building services”…. yea, SEO was about “links” for years, and it still is to a large degree….but unless you want to get penalized, you have to take the “it’s way more than links” approach… in order for SEO to work (w/o fear of getting penalized) today, you have to look at sending in natural signals…so thus, you must do “natural” things…things that will get others “talking” about it, and about you….SEO has evolved a lot over the years….Google used to recommend 1 thing (create a great site and create great things), but for years we all knew that SEO was about links and anchor text….today, I think Google has caught up (to some degree) with the user, and with “real signals”…yesterday it was “gaming” the system….today it’s about doing real things…real marketing…and getting your name out to the community by creating great things that spread, and that get people to come back to your site….those SEOs and businesses who don’t realize that the game has changed will probably be doing a lot of disavowing at some time in the future, and many SEOs will be out of business if they think it’s a game where you can do “fake things” to “get links” in bulk….in a few years we’ll see which internet marketing companies are still around…those who are still around will be those who do real marketing, using real people and promoting to other real people…the link game itself has changed…in the past we looked at link graphs…today we look at people graphs….who is talking about you, what are they saying….it’s way more than “who links to me, and how do they link to me”….Google is turning it into “everyone gets a vote”, and “everyone has a value”…and in order to rank, you’ll need real people of value talking about your site…and you’ll need a great user experience when they get there, and you’ll need loyal people who continue to return to your site, and you’ll need to continue to do great things that get mentions….
SEO is no longer a game of some linking algorithm, it’s now really a game of “how can you create a great user experience and get a buzz around your pages and brand”.
Aaron: With as much as SEO has changed over the years, it is easy to get tripped up at some point, particularly if one is primarily focused on the short term. One of the more impressive bits about you is that I don’t think I’ve ever seen you unhappy. The “I’m feeling lucky” bit seems to be more than just a motto. How do you manage to maintain that worldview no matter what’s changing & how things are going?
Jim: Well, I don’t always feel lucky…I know in 2008 when Google hit a few of our clients because we were buying links for them I didn’t feel lucky (though the day before, when they ranked #1, I felt lucky)….but I’m in this industry for the long term…I’ve been doing this for almost 15 years….and yes, we’ve had to constantly change over the years, and continue to grow, and growing isn’t always easy…but it is exciting to me, and I do feel lucky for what I have…I have a job I love, I get to work with people whom I love, in an industry I love, I get to travel around the world and meet wonderful people and see cool places…and I employ 100 people and win “Best Places to Work” awards, and I’m able to give back to the community and to society, and to the earth…those things make me feel lucky…SEO has always been like a fun game of chess to me…I’m trying to do the best I can with any move, but I’m also trying to think a few steps ahead, and trying to think what Google is thinking on the other side of the table…..ok…yea, I do feel lucky….maybe it’s the old hippy in me…I always see the glass half full, and I’m always dreaming of a better tomorrow….
If I can have lots of happy clients, and happy employees, and do things to make the world a little better along the way, then I’m happy…sometimes I’m a little stressed, but that comes with life….in the end, there’s nothing I’d rather be doing than what I currently do….and I always have big dreams of tomorrow that always make the trials of today seem worth it for the goals of what I want to achieve for tomorrow.
Aaron: Thanks Jim!
Jim Boykin is the CEO of the Internet Marketing Ninjas company, and a Blogger and public speaker. You can find Jim on Twitter, Facebook, and Google Plus.
Facebook Hashtags Not Yielding Added Exposure for Brands [Study]
Ever since Facebook hashtags launched, marketers have been trying to determine ROI. So far, reports have shown that hashtags may not be performing up to par. In fact, a new analysis shows engagement and viral reach is down on posts with hashtags.
Bing Video Adds More Filters, Navigation & Pop Overlays
Bing announced some upgrades to Bing Video search. The upgrades include additional search filters, larger video thumbnails, pop-out hover previews, video overlays, and improved navigation. New filters – Allow people to easily filter through video content categories by length, date, resolution and…
Twitter updates UI to show connections in conversations
Just noticed Twitter now shows a blue line between people you follow who are having a conversation.
Topsy Becomes Definitive Twitter Search Engine
Social search and analytics provider Topsy announced that users can now search its index for every single tweet ever published since Twitter’s inception in 2006. This capability is very useful and likely makes Topsy an acquisition target (probably by Twitter). According to the site, there…