Scraper sites have been a problem for quite a long time now, and even though Google has worked out many options that helped reduce their effect, the problem persists and is still noticeably affecting rankings. Last week, Matt Cutts, the head of Google's spam-fighting team, called for help via Twitter, tweeting:
Scrapers getting you down? Tell us about blog scrapers you see: http://goo.gl/S2hIh We need datapoints for testing.
Scraper sites as a whole have been a hassle for the search engine giant, but those that rank higher than the original page are a particular worry. To help the cause, Matt's tweet included a link to a Google Docs form. The form asks users to describe their "scraping problem" in detail: they must submit the exact URLs of the scraper pages along with the original pages the content was scraped from. The form also notes that Google "may use data you submit to test and improve our algorithms."
It's not the first time Google has called on victims and users for help, but this request was noteworthy because the issue has been especially persistent over the last few months.
Google has long been fighting scraper sites, but critics have focused on the issue repeatedly over the past year, with growing criticism pointing to a decline in the quality of Google's search results. Google's reply was predictable: it disagreed with claims that search results had grown worse. Still, it promised that it had processes in place to fight spam with "new efforts," and its post included this point:
And we’re evaluating multiple changes that should help drive spam levels even lower, including one change that primarily affects sites that copy others’ content and sites with low levels of original content.
This was followed by a blog post from Matt about a week later, which said that "slightly over 2% of queries change in some way" after the update. He also wrote that "searchers are more likely to see the sites that wrote the original content rather than a site that scraped or copied the original site's content."
With the Panda update in February, things were expected to improve, but the effect was in fact negative: the update was followed by webmasters flooding Google's help forums to say the situation had worsened. Yet another Panda update, Panda 2.2, came out in mid-June, and that time Matt confirmed that scraper sites would be targeted.
Yet again Google stated that it was "testing algorithmic changes for scraper sites (especially blog scrapers)". Google has been quite vocal about this issue, which has been a headache for many, but an actual improved solution is not yet in place.