Good evening, everyone. In around 34 hours, my proposal for a SPORTS delegation to the Engagement Project will pass if none of the voters reverse their decision. Before that, I would like to address the concerns of a few fellow members.
Concerns
Although I got overwhelming support from other community members and large stakeholders, I couldn't convince one member to favour my proposal. I would like to address his fears with an explanation, so that if any other community members share the same fear, it will be addressed too.
In a recent post titled "What do you think about the SPORTS Engagement Project", there are very encouraging comments by other community members, except one:
is a bot, I can imagine that one bot will spam/comment and another bot (amr008.sports) will upvote it and spam with another comment. This is not engagement! This is what I want to downvote.
I completely understand the fear, but I will try to answer it.
Two things:
- Yes, it is a bot, and it replies to the top 25 engagers every day, telling them their number of comments, the number of authors they have talked to, and their rank.
- There are three reasons for this:
  - We are still a young project that needs visibility and awareness. We need people to know that engagement matters and that quality comments are rewarded.
  - To show people how they are doing and to motivate them to improve their ranking, their number of comments, and the number of authors they talk to.
  - All the rewards the bot gets (if any) will go to the engagement project itself and will be paid out to delegators, which will further encourage participation and delegation.
- No, the bot doesn't upvote bots. That is simply not possible; there is absolutely zero chance that the bot upvotes another bot. Why?
  - Because bot comments are not made from the sportstalksocial frontend. I will attach the code snippet here, which any other member can verify:
Code snippet
import json
from datetime import datetime as dt, timedelta

import pandas as pd

# conn is an open HiveSQL database connection (a sketch of creating it follows below)
# Pull every reply (parent_author <> '') created in the last two days
Test_Query = pd.read_sql_query(''' select * from Comments where parent_author <> '' and created > GETDATE()-2 ORDER BY created DESC ''', conn)

save_list = []
ignore_list = []
c = 0
for i in range(0, len(Test_Query)):
    try:
        # keep only comments created yesterday (UTC)
        if Test_Query['created'][i].date() == (dt.utcnow().date() - timedelta(1)):
            json_app = json.loads(Test_Query['json_metadata'][i])
            if 'app' in json_app:  # only keep comments whose metadata names the posting app
                save_list.append([Test_Query['author'][i], Test_Query['parent_author'][i], '@' + Test_Query['author'][i] + '/' + Test_Query['permlink'][i], json_app['app'], Test_Query['created'][i].date(), Test_Query['body'][i]])
    except Exception:
        # rows with malformed json_metadata are skipped but remembered
        ignore_list.append([Test_Query['author'][i], '@' + Test_Query['author'][i] + '/' + Test_Query['permlink'][i]])
        c = c + 1

df_need = pd.DataFrame(save_list)
# column 3 of save_list holds the 'app' string; keep one DataFrame per frontend
df_need_leo = df_need[df_need[3].str.startswith('leofinance')].reset_index()
df_need_stem = df_need[df_need[3].str.startswith('stemgeeks')].reset_index()
df_need_ctp = df_need[df_need[3].str.startswith('clicktrackprofit')].reset_index()
df_need_sports = df_need[df_need[3].str.startswith('sportstalksocial')].reset_index()
This is a replica of what I use in my script.
I will explain it part by part:
Test_Query = pd.read_sql_query(''' select * from Comments where parent_author <> '' and created > GETDATE()-2 ORDER BY created DESC ''', conn)
This reads every reply (comments with a non-empty parent_author) from the past two days.
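One thing the snippet assumes is an open database connection named conn. As a minimal sketch, assuming you query HiveSQL through pyodbc (the server, database, and credentials below are placeholders, not my real ones):

import pyodbc

# Placeholder HiveSQL credentials - substitute your own login details
conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=vip.hivesql.io;DATABASE=DBHive;'
                      'UID=Hive-youruser;PWD=yourpassword')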
save_list = []
ignore_list = []
c = 0
for i in range(0, len(Test_Query)):
    try:
        if Test_Query['created'][i].date() == (dt.utcnow().date() - timedelta(1)):
            json_app = json.loads(Test_Query['json_metadata'][i])
            if 'app' in json_app:
                save_list.append([Test_Query['author'][i], Test_Query['parent_author'][i], '@' + Test_Query['author'][i] + '/' + Test_Query['permlink'][i], json_app['app'], Test_Query['created'][i].date(), Test_Query['body'][i]])
    except Exception:
        ignore_list.append([Test_Query['author'][i], '@' + Test_Query['author'][i] + '/' + Test_Query['permlink'][i]])
        c = c + 1
This takes each row and stores the following:
- Author, Parent_author, Permlink, App, Date, Body.
Here the App field is the important one.
Why? It tells us which frontend the comment was made from.
Ex. (Image source: https://hiveblocks.com):
This screenshot is from a recent post of mine made from the leofinance frontend; you can see the app field there.
Now let's move to the bots. As you can see in the screenshot, their app/frontend is "beem".
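To make that concrete, here is an illustrative comparison (the metadata strings below are made-up examples, not real chain data):

import json

# A comment posted through a community frontend
frontend_meta = json.loads('{"app": "sportstalksocial/1.0", "format": "markdown"}')
print(frontend_meta['app'])  # sportstalksocial/1.0 -> counted by the script

# A comment posted by a typical beem-based bot
bot_meta = json.loads('{"app": "beem/0.24.19", "format": "markdown"}')
print(bot_meta['app'])       # beem/0.24.19 -> never matches the frontend filter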
Edit: I got to know that there is a way to set a custom frontend name instead of beem. Although it is rare that anybody would use this method, it is a possibility. To prevent it, I will be checking for similar and repeated (identical) comments to make sure this method won't be used to abuse the system; a sketch of one such check follows below.
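As a hypothetical sketch of such a check (not the exact code I run), one could flag any author whose identical comment body appears more than once in the window, using the df_need columns from above (0 = author, 5 = body):

# Rows whose (author, body) pair occurs more than once are suspicious
dupes = df_need[df_need.duplicated(subset=[0, 5], keep=False)]
suspicious_authors = dupes[0].unique()
print(suspicious_authors)  # these accounts would get a manual review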
I am tagging a few members here to verify this or correct me if I am wrong.
So where am I checking the 'app' field?
Code:
df_need = pd.DataFrame(save_list)
df_need contains the saved rows. As you can see, column (3) holds the app/frontend details.
Now this particular code:
df_need_leo = df_need[df_need[3].str.startswith('leofinance')].reset_index()
df_need_stem = df_need[df_need[3].str.startswith('stemgeeks')].reset_index()
df_need_ctp = df_need[df_need[3].str.startswith('clicktrackprofit')].reset_index()
df_need_sports = df_need[df_need[3].str.startswith('sportstalksocial')].reset_index()
stores only the comments that match each particular frontend and ignores the others.
df_need_sports output:
As you can see there, every remaining comment was made from the sportstalksocial frontend. So this script counts only comments made from the sportstalksocial frontend for SPORTS curation.
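To illustrate the filter on made-up rows (example data only, not real output):

import pandas as pd

demo = pd.DataFrame([['alice', 'parent1', '@alice/link1', 'sportstalksocial/1.0'],
                     ['somebot', 'parent2', '@somebot/link2', 'beem/0.24.19'],
                     ['bob', 'parent3', '@bob/link3', 'leofinance/0.2']])
print(demo[demo[3].str.startswith('sportstalksocial')])
# only alice's row survives; the beem comment never makes it into SPORTS curation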
I am taking measures to prevent spam
The project concerns itself only with comments, not posts, because there is already an account that does that.
I am fine-tuning the code every day to introduce more criteria, so that the best quality commenters are chosen.
That's it from my side. This post is meant to address the fears of community members and to make them aware that this project is not there to help a few people or bots in any way, but to reward genuine users who put effort into engaging with others and contributing to the community.
Finally, I would like to thank all those who raised questions about this; otherwise I don't think I would have posted this explanation. Now I can redirect others here to read about the project.