While the bot and the team might do amazing work keeping the blog-post side of the Steemit community free from crap by fighting plagiarism, and while others might do amazing work neutralising false flags, there is another type of abuse that runs rampant on Steemit and currently goes unchecked.
While actual blog posts drive what happens on Steemit, a large part of the reward pool is consumed by the interaction that follows some of these posts. It may seem unfair that I can make twice as much from a twenty-word comment on a post about Bitcoin as I make on a 2000-word piece of fiction, but that is just how Steemit works: everyone chooses how they use their voting power, and that's OK.
The problem arises, however, when this fact starts being milked in a way that lulls potential upvoters into a false sense of interaction. Some of these attempts become obvious, especially after seeing them used again and again. We have all seen the "Great post!" type of comment that appears minutes after something new is posted. Others, like "Amazing photo, love it!", are more targeted and thus less obvious examples of the same thing.
There are accounts that post small sets of variations on comments like these as a way to milk the reward pool. Some of these accounts do so for a number of hours a day, so they might be real people. Other accounts keep at it 24/7 and thus are clearly bots trying to pass as real people.
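That 24/7 observation can itself be turned into a simple detection heuristic: real people sleep, so an account that has commented in every hour of the clock is suspicious. The sketch below is only an illustration of that idea; the function name and the 24-hour coverage metric are my own assumptions, not part of any existing tool.

```python
from datetime import datetime, timezone

def active_hour_coverage(comment_timestamps):
    """Fraction of the 24 clock hours (UTC) in which an account commented.

    Real people rarely cover much more than ~16 of the 24 hours;
    an account near 1.0 is a candidate for being automated.
    """
    hours = {datetime.fromtimestamp(ts, tz=timezone.utc).hour
             for ts in comment_timestamps}
    return len(hours) / 24

# Hypothetical sample data: Unix timestamps of an account's comments.
# A human-like account commenting in six distinct hours of the day:
human = [1500000000 + h * 3600 for h in (9, 10, 13, 14, 20, 21)]
# A bot-like account with a comment in every hour of the day:
bot = [1500000000 + h * 3600 for h in range(24)]

print(active_hour_coverage(human))  # 0.25
print(active_hour_coverage(bot))    # 1.0
```

A coverage score like this would of course only be one signal among several; on its own it would flag shift workers and accounts shared across time zones.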
Could we fight this phenomenon? We probably could, but if we did, it might turn into an arms race. An arms race that would require a coordinated effort, using both automated scripts and a human task force.
That is, to fight these bots acting as humans, we would need the combined strength of our own bots and that of human anti-curation. Let me try to explain what I am proposing.
What I am proposing would take a three-stage approach to fighting comment bots:
- Identification (bot)
- Validation (humans)
- Response (bots)
I'll walk through each of these below.
Identification bot
The first step is finding potential comment bots and what I'll refer to as human bots: people simply pasting the same silly text in response to a wide range of posts without even glancing at the article they are commenting on. I propose a central, open-source bot aimed purely at detecting potential comment bots. I've done some work on such a bot, and I believe I can complete a first working version soon. Given the arms-race nature of the proposed process, though, I think relying on third-party contributions, possibly backed by utopian.io, will eventually be needed, as bots might get smarter at avoiding detection.
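To give a feel for what such a detection bot might look for, here is a minimal sketch of one possible signal: an account whose recent comments are mostly near-duplicates of each other. The function name, thresholds, and sample comments are all my own illustrative assumptions, not taken from the bot I've been working on.

```python
from difflib import SequenceMatcher
from itertools import combinations

def looks_templated(comments, threshold=0.7, min_pairs=0.5):
    """Flag an account whose comments are mostly near-duplicates.

    `comments` is a list of that account's recent comment bodies.
    Returns True when at least `min_pairs` of all comment pairs are
    more than `threshold` similar -- a crude signal that the account
    pastes variations of one template under many posts.
    """
    if len(comments) < 2:
        return False
    normalised = [c.lower().strip() for c in comments]
    pairs = list(combinations(normalised, 2))
    similar = sum(
        1 for a, b in pairs
        if SequenceMatcher(None, a, b).ratio() > threshold
    )
    return similar / len(pairs) >= min_pairs

spam = ["Great post!", "Great post, love it!", "great post!!"]
real = ["Great post!", "I disagree with your take on witnesses.",
        "Thanks for the detailed voting-power breakdown."]

print(looks_templated(spam))  # True
print(looks_templated(real))  # False
```

A production detector would combine several such signals (text similarity, posting cadence, reply latency after a post appears) before handing candidates to the human-validation stage.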
Human Validation
The second step of the process would be human validation. This is the step I'm not yet confident about how to fit in exactly. As this should be a group effort anyway, I would like to ask anyone to give it some thought and share your ideas.
Personal response bots
When we have an up-to-date, human-validated list of comment bots, this list should be published. A personal response bot could use this list to perform two actions:
- Mute the comment bot (so its owner won't see it)
- Look for upvotes by blog post authors on comments to their posts by known comment bots.
If such an upvote is seen while the owner's voting strength is at (nearly) 100%, say at least 99.95%, the personal response bot could downvote the comment at 10% of maximum voting weight. The idea is that the bot would act a bit like an away bot, active only when its owner is away for a longer period and not actively curating any content. As the bot only works with voting power that would otherwise stay unused, it runs for free from its owner's perspective. And as the bot only downvotes confirmed comment bots, it actually helps all legitimate posts get a fairer share of the reward pool.
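The decision rule above can be sketched as a small, self-contained function. Everything here is a hypothetical stand-in: the `Vote` shape, the account names, the blacklist, and the exact thresholds are my assumptions for illustration, not the API of any existing Steem library.

```python
from dataclasses import dataclass

FULL_POWER = 99.95     # only act when the owner's voting power is ~100%
DOWNVOTE_WEIGHT = -10  # 10% of maximum weight, cast as a downvote

@dataclass
class Vote:
    voter: str           # who cast the upvote
    comment_author: str  # who wrote the upvoted comment
    post_author: str     # author of the post the comment sits under

def response_for(vote, owner_voting_power, known_comment_bots):
    """Return a downvote weight, or None when the bot should do nothing.

    Fires only when the comment author is a validated comment bot,
    the upvote came from the author of the parent post, and the
    owner's voting power is essentially full (i.e. otherwise unused).
    """
    if vote.comment_author not in known_comment_bots:
        return None
    if vote.voter != vote.post_author:
        return None
    if owner_voting_power < FULL_POWER:
        return None
    return DOWNVOTE_WEIGHT

# Hypothetical example: a post author upvoting a known comment bot.
blacklist = {"nice-post-bot"}
vote = Vote(voter="alice", comment_author="nice-post-bot",
            post_author="alice")

print(response_for(vote, 99.98, blacklist))  # -10
print(response_for(vote, 80.0, blacklist))   # None
```

Note the last guard: because the bot stands down whenever voting power is below the threshold, it can never compete with its owner's own curation, which is what makes it effectively free to run.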
Who is in?
If you think this would be a good initiative, please interact below. I need your help, whatever help you can give, if we want to make this a success. The above is just a rough outline. A lot of work will be needed to get this thing up and running, and a lot of people willing to take an active role. So please comment, resteem, and share your own ideas on how we can make this a success. If we can make this work, really work, comment bots could be made into an uneconomical use of resources for their creators, and we might actually see the end of them.