“With 150 million pages, the Web had 1.7 billion edges (links).” Kevin Heisler, that ratio holds true pretty well as the web gets bigger. A good rule of thumb is that the number of links is about 10x the number of pages. I agree that it’s pretty tragic that we lost Rajeev Motwani, who was a co-author of many of those early papers. I got to talk to Rajeev a little bit at Google, and he was a truly decent and generous man. What has heartened me is to see all the people that he helped, and to see those people pay their respects online. No worries on the Consumer WebWatch question; I’m a big fan of Consumer WebWatch, and somehow I just missed their blog. I just want to reiterate that even though this feels like a huge change to a certain segment of SEOs, in practical terms this change really doesn’t affect rankings very much at all.
I really think Google keeps its enemies sweet and close, all the while gathering info on ALL SEO tactics, so they can compare and discount them where warranted. Put yourself in Google’s shoes. It relies on returning the most trustworthy and relevant pages in the SERPs for any given search term. That IS the all-important foundation of the Google empire. IF that can be artificially manufactured by SEO and money, Google has lost not only the battle, but the war.
Ah – well the Reasonable Surfer is a different patent (and therefore a different algorithm) from PageRank. I would imagine that initially, only the first link counted, simply because there either IS or IS NOT a relationship between the two nodes. This meant it was a binary choice. However, at Majestic we certainly think about two links between page A and page B with separate anchor texts… in that case, with a binary choice, either the data on the second link would need to be dropped or the backlink counts would start to get bloated. I wrote about this on Moz way back in 2011!
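A small sketch of the trade-off described above (the data model and page names are illustrative, not Majestic's actual implementation): a binary adjacency set can keep only one edge per page pair, so a second link with a different anchor text is either dropped or, if every raw link is kept, the backlink count inflates.

```python
# Hypothetical link data: pageA links to pageB twice, with different anchors.
links = [
    ("pageA", "pageB", "cheap flights"),
    ("pageA", "pageB", "travel deals"),   # second link, distinct anchor text
    ("pageC", "pageB", "flights"),
]

# Binary choice: at most one edge per (source, target) pair.
# setdefault keeps the first link's anchor; the second is silently dropped.
binary_edges = {}
for src, dst, anchor in links:
    binary_edges.setdefault((src, dst), anchor)

# Multigraph alternative: every link is kept, so raw backlink counts
# grow whenever a page links to the same target more than once.
multigraph_edges = [(src, dst, anchor) for src, dst, anchor in links]

print(len(binary_edges))                  # 2 distinct page pairs
print(len(multigraph_edges))              # 3 raw links
print(binary_edges[("pageA", "pageB")])   # 'cheap flights'
```

Neither choice is free: the binary model loses the second anchor text, while the multigraph model makes "number of backlinks" a bigger number than "number of linking pages".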
Quality content is more likely to get shared. By staying away from creating "thin" content and focusing instead on content that cites sources, is lengthy, and reaches unique insights, you'll be able to gain Google's trust over time. Remember, this is a function of time. Google knows you can't just go out there and create massive amounts of content in a few days. If you try to spin content or duplicate it in any fashion, you'll suffer a Google penalty and your visibility will be stifled.
A: I pretty much let PageRank flow freely throughout my site, and I’d recommend that you do the same. I don’t add nofollow on my category or my archive pages. The only place I deliberately add a nofollow is on the link to my feed, because it’s not super-helpful to have RSS/Atom feeds in web search results. Even that’s not strictly necessary, because Google and other search engines do a good job of distinguishing feeds from regular web pages.
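In markup, that one deliberate nofollow might look like this (the feed URL is illustrative, not taken from any particular site):

```html
<!-- Illustrative only: a feed link marked nofollow so search engines
     aren't encouraged to surface the RSS/Atom feed in web results. -->
<a href="https://example.com/feed/" rel="nofollow">Subscribe via RSS</a>
```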
Journalists and writers are always on the lookout for experts to contribute quotes for their articles. Some (but not all) will include backlinks to their sources’ websites. Getting quoted in media outlets is a great way to not only get backlinks, but also build credibility within your industry. Even in instances where you don't get backlinks, this profile page for PMM's CEO Josh Rubin is a good example of how you can showcase your media appearances, something which both Google and your clients value when evaluating your authority.
We were recently torched for aggressive linking to keep up with competitors, and for doing things like PageRank sculpting because they were doing it too. This is very helpful information. We have been undoing as much as we can, and removing all the internal nofollows was one of the things we've done. We have also gone back to linking to useful sites for our users without the nofollows.
However, if you're like the hundreds of millions of other individuals who are looking to become the next David Sharpe, there are some steps that you need to take. In my call with this renowned online marketer, I dove deep into a conversation submerged in the field of internet marketing, and worked to really understand what it takes to be a top earner. We're not just talking about making a few hundred or a few thousand dollars to squeak by here; we're talking about building an automated cash machine. It's not easy by any means.
Write a description that would both inform and interest users if they saw your description meta tag as a snippet in a search result. While there's no minimum or maximum length for the text in a description meta tag, we recommend making sure that it's long enough to be fully shown in Search (note that users may see different sized snippets depending on how and where they search), and contains all the relevant information users would need to determine whether the page will be useful and relevant to them.
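As a concrete illustration (the page and wording are hypothetical), such a tag sits in the page's head and describes that specific page rather than the whole site:

```html
<!-- Hypothetical example: specific to the page, informative, and short
     enough that search engines can usually display it in full. -->
<meta name="description"
      content="Step-by-step guide to writing meta descriptions: recommended length, common mistakes, and before/after examples.">
```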
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[46]
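A minimal robots.txt matching the advice above might look like this (the paths are hypothetical; real sites would use their own cart and internal-search URLs):

```
# Illustrative robots.txt, served from the domain root.
# Keeps all crawlers out of cart pages and internal search results.
User-agent: *
Disallow: /cart/
Disallow: /search
```

Note that robots.txt only discourages crawling; a page that must be kept out of the index entirely is better served by the noindex robots meta tag, since a blocked-but-linked URL can still be indexed without its content.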
Consumers today are driven by the experience. This shift from selling products to selling an experience requires a connection with customers on a deeper level, at every digital touch point. TheeDigital’s internet marketing professionals work to enhance the customer experience, grow your online presence, generate high-quality leads, and solve your business-level challenges through innovative, creative, and tactical internet marketing.
Adjusting how Google treats nofollows is clearly a major shift (as the frenzy in the SEO community has demonstrated). So, if Google were to adjust how they treat nofollows, they would need to phase it in gradually. I believe this latest change (whether it happened in 2008 or 2009) is simply a move in the direction of greater changes to come regarding nofollow. It is the logical first step.
Cause if I do that, if I just write good content while my 100+ competitors link build, article market, forum comment, social bookmark, release viral videos, and buy links, I’ll end up at the very bottom of the pile, great content or not. Really, I’m just as well off pulling every sneaky trick in the book to get my site to the top, because everyone does it anyway, and if I don’t, what do I have to lose?”

Google's founders, in their original paper,[18] reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear in log n, where n is the size of the network.
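The iterative computation being described is the standard power-iteration form of PageRank. A toy sketch (not Google's actual implementation; graph, damping factor, and tolerance are illustrative) shows the same behaviour in miniature: the rank vector is updated until the change between iterations drops below a tolerance, and the iteration count is what the paper's convergence figures refer to.

```python
# Toy power-iteration PageRank. Assumes every page has at least one
# out-link (no dangling-node handling, unlike production implementations).
def pagerank(out_links, damping=0.85, tol=1e-6, max_iter=200):
    pages = list(out_links)
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}          # uniform starting vector
    for iteration in range(1, max_iter + 1):
        new_ranks = {}
        for p in pages:
            # Each in-neighbour q passes along an equal share of its rank.
            incoming = sum(ranks[q] / len(out_links[q])
                           for q in pages if p in out_links[q])
            new_ranks[p] = (1 - damping) / n + damping * incoming
        delta = sum(abs(new_ranks[p] - ranks[p]) for p in pages)
        ranks = new_ranks
        if delta < tol:                          # converged
            return ranks, iteration
    return ranks, max_iter

# Tiny 3-page web: A and B link to each other, both link to C, C links to A.
graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A"]}
ranks, iters = pagerank(graph)
print(iters, {p: round(ranks[p], 3) for p in sorted(ranks)})
```

Even on this 3-page graph the vector settles in a few dozen iterations; the paper's observation is that the required iteration count grows only logarithmically as the graph gets enormously larger.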
2. Was there really a need to make this change? I know all sites should be equally capable of being listed in search engines without esoteric methods playing a part. But does this really happen anyway (in search engines or life in general)? If you hire the best accountant you will probably pay less tax than the other guy. Is that really fair? Also, if nobody noticed the change for a year (I did have an inkling, but was totally and completely in denial) then does that mean the change didn’t have to be made in the first place? As said, we now have a situation where people will probably make bigger and more damaging changes to their site and structure, rather than add a little ‘nofollow’ to a few links.

I think it is important you distinguish your advice about no-following INTERNAL links from no-following EXTERNAL links in user-generated content. Most popular UGC-heavy sites have no-followed links because they can’t possibly police them editorially and want to give some indication to the search engines that the links haven’t been editorially approved, but the links still might provide some user benefit.

Matt Cutts, it’s Shawn Hill from Longview, Texas and I’ve got to say, “you’re a semseo guru”. That’s obviously why Google retained you as they did. Very informative post! As head of Google’s Webspam team, how do you intend to combat Social Networking Spam (SNS)? It’s becoming an increasingly obvious problem in SERPs. I’m thinking blog spam should be the least of Google’s worries. What’s your take?