Duplicate content has long been a point of confusion for many SEOs. We’ve written about it before, but today we’ll be talking about how to avoid being your own worst enemy.

See, when people think of duplicate content, they usually think of someone scraping their content and posting it all over the web. Or they think of article marketing, where you submit the same article to thousands of sites (hint: this doesn’t work anymore). 

The reality is, as SEOs, we’re often our own worst enemies when it comes to duplicate content. And most of us don’t even know it!

How Does Duplicate Content Work?

One of SEO’s great mysteries: how does Google treat duplicate content? Do they ignore it? Penalize it? If they penalize it, how do they determine who created the content in the first place?

Now, when most people think of duplicate content, they worry about SEOs with malicious intent: SEOs specifically targeting their websites and trying to bring them down. As we’ve mentioned before, the odds of this happening are very low.

Here’s how Google deals with duplicate content. Obviously, they don’t want two copies of the same page on page #1 of Google. So when Google encounters duplicate content, they only rank one of the pages, and they use authority as the determining factor. If your website has more backlinks, age, or social authority than a scraper site, your pages will rank and the scraper site’s won’t (and this is almost always the case).

What, scraper site?! You mean someone could take my content and rank above me?

In general, I’d say there’s very little reason to fear scrapers. If your website has any decent level of traffic, you’re going to get scraped. It’s not run-of-the-mill scrapers you should worry about; it’s scrapers with malicious intent, specifically trying to take your site down. But the odds of that happening are so low you really shouldn’t worry about that either.

How Are SEOs Their Own Worst Enemies?

Given that Google will only rank one page, it’s clear why you should avoid duplicate content. If you have the same article all over the web, you create competition between every site you posted it on. The effect isn’t cumulative, since only one copy can rank.

It’s obvious that you should avoid duplicate content on your own site, because then you’re competing with yourself. Google will only rank one of those pages, so any link juice pointing to the non-ranked page isn’t going to count. You’ll effectively be splitting your link juice between two pages, and only one page’s share will count.

Great. That’s obvious though – does anyone really have duplicate content on their site?

Well, some do. So that’s the first thing you should check. If you have duplicate content on your own site, change it!

But that’s not the reason SEOs are their own worst enemies. Tell me, are these two URLs the same, or different?

http://www.example.com
http://example.com

At first glance, they seem like the same URL. After all, they both take you to the same place. But they’re not: “www” is actually a subdomain of your root domain. Yes, www and non-www URLs count as two different pages.

And whether you type the www or non-www version, both serve the same content. Yes, that’s duplicate content! The majority of internet marketers and aspiring SEO experts don’t realize this.

What’s the solution? Pick the version you prefer (it doesn’t matter which one) and redirect all traffic to that specific URL. If you like the look of www, you’ll want to 301 redirect all traffic from non-www to www.

301 redirects are permanent and tell Google to forward all the link juice to the new page. Essentially, you’re telling Google: “page A has permanently moved to page B; treat page B as the new page A.”
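As a rough sketch, here’s what a non-www-to-www 301 redirect might look like in an Apache .htaccess file (this assumes your site runs on Apache with mod_rewrite enabled; example.com is a stand-in for your own domain):

```apache
# Turn on the URL rewrite engine (requires mod_rewrite)
RewriteEngine On

# Match any request whose Host header lacks the "www" prefix...
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]

# ...and permanently (301) redirect it to the www version, keeping the path
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

If your site runs on a different server (Nginx, IIS, etc.), the syntax differs, but the idea is the same: one permanent redirect rule covering every URL on the non-preferred hostname.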

Another common cause of duplicate content is the “Printer Friendly” version of a page. Multiple versions of the same page can get indexed, and of course, only one will rank.

Session IDs are another cause of duplicated content: if your site appends a session parameter to its URLs (for example, a hypothetical example.com/page?sessionid=12345), every visitor can generate a unique URL for the exact same content.

Using 301 redirects, you can consolidate the duplicated pages into one single page that the search engines can crawl, index, and rank properly.

If you are looking for something simpler to implement, try the rel=”canonical” tag. It can pass along much the same ranking power, and it takes far less time to set up: just put it in the HTML head of the duplicate page, pointing at the preferred version.
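For instance, a printer-friendly copy of a page could point back to the main version like this (the domain and path here are placeholders, not real URLs):

```html
<head>
  <!-- Tell search engines that the preferred, canonical version
       of this content lives at the URL below -->
  <link rel="canonical" href="http://www.example.com/flea-collars" />
</head>
```

The tag goes on the duplicate page, and the href points to the page you want search engines to rank.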

Be careful when creating the tags and the pages: capitalization counts. Look at the two URLs below; a simple change in case can create duplication issues.

http://www.example.com/flea-collars
http://www.example.com/Flea-Collars

A search engine sees those as two unique pages, simply due to the capitalization, so keep that in mind when creating rel=”canonical” tags.

If you know you have pages with duplicated content and you want to exclude them from the search engines’ rankings, use the noindex, nofollow tags. These meta robots tags tell a search engine not to index the page or follow the links on it, eliminating the duplicate content issue.
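Here’s what that looks like in practice — a single meta tag placed in the HTML head of the page you want kept out of the index:

```html
<head>
  <!-- Ask search engines not to index this page
       and not to follow the links on it -->
  <meta name="robots" content="noindex, nofollow" />
</head>
```

Note that this removes the page from the rankings entirely, so use it only on versions of a page you never want to rank (and use a 301 or canonical tag when you want the authority consolidated instead).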

Other causes of duplicated content are more obvious, such as spinning articles, submitting the same article to various sites, or blatantly stealing content from another site and calling it your own.

It may be difficult to avoid duplicated content throughout your site, especially if you have many pages that all deal with similar topics. You have to try, though, especially if you want to rank well in the search engines. Be as specific as you possibly can on each page of content. So, if you are selling pet products and have a page for dog flea collars and one for cat flea collars, write specifically for each pet instead of creating generic content where you simply swap the word “cat” for “dog.”

Do Not Steal Content

If you feel like an article would be a great fit for your website, ask permission to use it, or at the very least, link back to the original content and credit the owner by name. If you are not sure how to handle this, read the article from Moz on Dealing with Duplicate Content.

The latest Google updates have been focusing on content, not just originality, but quality as well. So, when you are taking the time to create content for your website, take the time to make it great. If you create compelling, informative, and top-quality content, Google and other search engines will treat you as an authority, and many other sites will link back to your content, giving you higher rankings.

It may feel impossible to create unique content for every one of your pages, but in the end, it is worth the time and effort. Hire professional freelancers to write the content, but expect to pay more than you would have a decade ago. The days of low-quality, keyword-stuffed content built on grey hat SEO tactics are over. Today, Google rewards only white hat SEO strategies; anything else invites penalties.

Not every duplicated content issue is intentional, which means even the smallest mistake in your site’s setup can cause a huge drop in your rankings.

Keep in mind that Google now reads the content on your page almost the way a human reader would. The main difference between Google and a human reader at this point is the technical advantage Google possesses. You can expect future updates to keep raising the bar for quality, original content on your site.

Want More SEO Tips and Tricks?

If you liked this article, be sure to sign up for our mailing list (do so by filling out the form below). We’ll send you all the best SEO tips straight to your inbox!

We also have a Facebook page. Give us a “like” and you’ll be notified on your wall whenever we post something new.

Lastly, don’t forget to run a free on-page site analysis. If you’re not ranking on page #1, find out why today.