Make it as easy as possible for users to go from general content to the more specific content they want on your site. Add navigation pages when it makes sense and effectively work these into your internal link structure. Make sure all of the pages on your site are reachable through links, and that they don't require an internal "search" functionality to be found. Link to related pages, where appropriate, to allow users to discover similar content.
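As a concrete sketch (the page names and URLs below are made up, not from any particular site), crawlable navigation and related-content links are ordinary anchor tags:

    <nav>
      <a href="/guides/">Guides</a>
      <a href="/guides/mobile-seo/">Mobile SEO</a>
    </nav>

    <!-- "Related pages" links at the end of an article -->
    <ul>
      <li><a href="/guides/page-speed/">Improving page speed</a></li>
      <li><a href="/guides/crawling/">How crawling works</a></li>
    </ul>

Pages reachable only through a search box, with no anchor tag pointing at them, may never be discovered by a crawler.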
Keep resources crawlable. Blocking page resources can give Google an incomplete picture of your website. This often happens when your robots.txt file is blocking access to some or all of your page resources. If Googlebot doesn't have access to a page's resources, such as CSS, JavaScript, or images, we may not detect that it's built to display and work well on a mobile browser. In other words, we may not detect that the page is "mobile-friendly," and therefore not properly serve it to mobile searchers.
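A common culprit is a robots.txt file that disallows the directories holding page resources. A minimal sketch, with hypothetical paths, of what to avoid and what to allow:

    # Problematic: Googlebot cannot fetch the CSS, JavaScript, or images
    # needed to render the page, so mobile-friendliness can't be verified
    User-agent: *
    Disallow: /css/
    Disallow: /js/
    Disallow: /images/

    # Better: keep page resources crawlable; block only what truly
    # shouldn't be fetched
    User-agent: *
    Disallow: /admin/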
There's a lot to learn in the internet marketing field, and the web is a crowded space filled with one know-it-all after another who wants to sell you the dream. However, what many people fail to do at the start, and something Sharpe learned along the way, is to actually understand what's going on out there in the digital world and how business and e-commerce work in general before diving in headfirst.

Establishment of customer exclusivity: A list of customers and their details should be kept in a database for follow-up, and selected customers can be sent offers and promotions related to their previous buying behaviour. This is effective in digital marketing because it allows organisations to build up loyalty over email.[22]
When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection, so their PageRank is divided evenly among all other pages. In other words, to be fair to pages that are not sinks, these random transitions are added to all nodes on the web. The damping factor, d, is usually set to 0.85; the residual probability, 1 − d, models a surfer abandoning the link trail and jumping to a random page, and was estimated from the frequency with which an average surfer uses his or her browser's bookmark feature. So, the equation is as follows:
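In standard notation, where N is the total number of pages, M(p_i) is the set of pages linking to p_i, and L(p_j) is the number of outbound links on page p_j:

    PR(p_i) = \frac{1 - d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}

In words: every page receives a baseline share (1 − d)/N from random jumps, plus a portion of the PageRank of each page that links to it. A minimal power-iteration sketch of this formula in Python follows; the function and variable names are illustrative, not taken from any particular library:

    # Minimal PageRank power iteration.
    # `links` maps each page to the list of pages it links to.
    def pagerank(links, d=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        pr = {p: 1.0 / n for p in pages}             # start from a uniform distribution
        for _ in range(iterations):
            nxt = {p: (1.0 - d) / n for p in pages}  # baseline from random jumps
            for p in pages:
                targets = links[p] or pages          # a sink spreads its score to every page
                share = d * pr[p] / len(targets)
                for t in targets:
                    nxt[t] += share
            pr = nxt
        return pr

    # Example: a tiny three-page web where page "c" is a sink
    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": []}))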
I just did a consult and opinion letter for an extremely large 200,000+ page corporate website that had been forced to temporarily remove its HTML sitemap due to compromised code that overloaded the server and crashed the site. A number of people at the company were concerned about the potential negative SEO implications of removing this page: the loss of PageRank equity transferred to the sitemap targets, and a feeling that the page was providing the robots with important pathways to the many orphan pages unavailable through the menu system. This article was helpful in debunking the feeling that a page with 200,000 links off of it was passing any link juice to its targets. PS: XML sitemap in place.

This is so funny. Google stifled the notion of linking to “great content” the minute they let on how important linking was to passing PageRank. In effect, the importance of links has led to PageRank hoarding and link commoditization, which in turn lead to all the things Google doesn’t like, such as spammy links, link farms, link selling, and link buying. What you end up with is a system, much like our economic system, where the rich get richer and the poor get poorer. Nobody has a problem linking to CNN, as if they really needed the links. On the flip side, who wants to make a dofollow link to a site that’s two days old, great content or not, when you can give your visitors a nofollow link that’s just as valuable to them? The whole notion of benefiting from a quality outbound link is a joke; the outbound linker receives zero benefit once you factor in the outflow of PageRank.
I think it is important that you distinguish your advice about nofollowing INTERNAL links from nofollowing EXTERNAL links in user-generated content. Most popular UGC-heavy sites have nofollowed links because they can’t possibly police them editorially and want to give the search engines some indication that the links haven’t been editorially approved, but might still provide some user benefit.
The issue is that this change makes it a bad idea to nofollow ANY internal link, since any internal page is bound to have a menu of internal links on it that keeps the PR flowing (as opposed to nofollow making it evaporate). So no matter how useless a page is to search engines, nofollowing it will hurt you. Many, many webmasters use either robots.txt or noindex to block the useless pages generated by e-commerce or forum applications; if this change applies to those methods as well, it would be really great to know, so we can stop sending a significant amount of weight into the abyss.
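For reference, the two blocking methods mentioned in that comment look roughly like this (the paths are hypothetical). A robots.txt rule stops the URL from being crawled at all, while a robots meta tag lets the page be fetched but asks engines not to index it:

    # robots.txt: keep crawlers away from auto-generated pages
    User-agent: *
    Disallow: /cart/
    Disallow: /forum/profile/

    <!-- Or, in the <head> of the page itself -->
    <meta name="robots" content="noindex, follow">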
While ordinary users were not much interested in pages' scores, SEOs felt that this was a great opportunity to make a difference for their customers. This obsession with PageRank made everyone feel that this ranking signal was more or less the only important one, despite the fact that pages with a lower PR score can outrank those with a higher one. What did we get as a result?
