So I’m chatting with Alex, the guy I talked about here, about his synonymizer tool:
[12:36] alexf: Even better, if settings for the rewrite of each article could be randomly variated (between certain ranges) to avoid having all the articles rewritten with the same parameters.
[12:36] alexf: but why you need this?
[12:37] alexf: if you set good settings any text will be rewritten different
[12:37] Q2hl: i dont know, im a fan of random
[12:37] alexf: even if you trying to rewrite same article twice – second copy will be different from first
[12:38] Q2hl: yes but dont you think rewriting all articles with same amount of nouns, adje, etc. will end up sharing some common pattern
[12:38] alexf: no
[12:38] Q2hl: in the sentence structure
[12:39] alexf: no, not at all
[12:39] alexf: first of all it depends on the source of the articles, if they are from different sources, synonymizer can’t make them look more like each other
[12:40] alexf: second – percentage of “mutation” only affects the quality of mutation
[12:41] alexf: the less percent – the better quality, article looks more like original, no silly mistakes
[12:42] alexf: but to make article look different from source, you need to put high % of mutation
[12:45] Q2hl: so for example
[12:46] Q2hl: if you had settings x y and z settings all the articles would have some degree of difference with original articles
[12:47] Q2hl: if you had settings x-3 y+5 and z-6 the degree of difference with the original article would be different
[12:48] Q2hl: meaning if your settings vary from article to article, your articles would have some variation in respects to legibility
[12:49] alexf: this is not necessary to make article look different
[12:49] alexf: even with same x y z it still will look very different
[12:50] alexf: because synonyms itself will be different each time
[12:51] Q2hl: I can’t help thinking that randomizing these settings would add another layer of differentiation, but Im just stubborn
[12:51] Q2hl: do you mind if I post this conversation in the blog?
[12:52] alexf: sure, no problem
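For reference, the per-article randomization Q2hl is arguing for would look something like this. This is only a sketch of the idea, not anything from Alex's tool; the parameter names and baseline values are made up for illustration.

```python
import random

# Baseline mutation percentages per part of speech.
# These names and numbers are illustrative, not the tool's real settings.
BASE = {"nouns": 60, "verbs": 50, "adjectives": 40}
JITTER = 10  # each article's settings drift up to +/-10 points from baseline

def settings_for_article():
    """Draw a fresh set of mutation percentages for one article,
    clamped to the valid 0-100 range."""
    return {pos: max(0, min(100, base + random.randint(-JITTER, JITTER)))
            for pos, base in BASE.items()}

print(settings_for_article())
```

Each article would then be rewritten with its own randomly drawn settings instead of one fixed x, y, z for the whole batch.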
Why do I post this conversation? Not because I am trying to make Alex include a feature he considers unnecessary. Truth be told, I trust his judgment more than mine.
I just thought the discussion brought up interesting points. This is what he says:
The quick brown fox skillfully jumps over the lazy dog
- nouns: fox, dog
- verbs: jumps
- adverbs: skillfully
- adjectives: quick, brown, lazy
Now let’s change 100% on each variable:
Output 1: The fast red wolf rapidly dances over the sleepy can
Now let’s change 100% on each variable except for the adjectives where we’ll change 33%:
Output 2: The speedy brown four legged animal rightly runs over the lazy pet
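The mutation percentage in the example above boils down to this: for each replaceable word, flip a weighted coin and swap in a synonym when it comes up heads. Here is a minimal sketch, assuming a tiny hand-made synonym dictionary (a real tool would use a thesaurus and part-of-speech tagging; none of this is Alex's actual implementation):

```python
import random

# Hypothetical synonym dictionary -- purely for illustration.
SYNONYMS = {
    "quick": ["fast", "speedy"],
    "brown": ["red"],
    "fox": ["wolf"],
    "jumps": ["dances", "runs"],
    "skillfully": ["rapidly", "deftly"],
    "lazy": ["sleepy", "idle"],
    "dog": ["pet", "hound"],
}

def mutate(text, percent):
    """Replace roughly `percent`% of the words that have known synonyms."""
    words = []
    for word in text.split():
        options = SYNONYMS.get(word.lower())
        if options and random.random() < percent / 100:
            words.append(random.choice(options))  # swap in a random synonym
        else:
            words.append(word)                    # keep the original word
    return " ".join(words)

sentence = "The quick brown fox skillfully jumps over the lazy dog"
print(mutate(sentence, 100))  # every known word swapped, like Output 1
print(mutate(sentence, 33))   # only some words swapped, like Output 2
```

Note Alex's point also falls out of this sketch: even with the same percentage, the synonyms themselves are chosen at random, so two runs over the same article differ anyway.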
Now here’s the question we were talking about. Output 1 is 100% different from the original sentence, while Output 2 keeps most of the original adjectives. Comparing the two:
- Output 1 is more unique and less readable.
- Output 2 is more readable and less unique.
Deciding how much readability vs. uniqueness you are going to give your content is one of the basic decisions you need to make when planning your Black Hat Strategies.
At first glance I thought being able to randomize this decision for each individual article was a good idea. Now I can see Alex’s point and agree that it is not necessary. I can’t really see a benefit to having one batch of sites with some pages filled with less readable and more unique content, and some pages with the opposite equation.
I just thought the whole thing was worth a post because it makes us think about this very basic element of content rewriting. So the million-dollar question is:
Do you want unique but less readable content? Or do you want less unique but more readable content?
I think that debate deserves a whole new post, or maybe even series of related posts.