Do Rewards Actually Predict Whether Hivers Stay? An AI-Assisted Re-Analysis
Foreword - by 
I suspect not everyone on Hive agrees that growing the user base is even a goal worth pursuing. There's a view - not often stated loudly, but I think it's more widely held than people let on - that more users just dilute the reward pool and drag down the network. If new users extract more value than they create, maybe we're better off small.
Over the past few years I've published a series on user retention exploring why Hive struggles to keep users. I've looked at cohort retention curves, downvotes, engagement levels, reward inequality, how long-timers dominate the platform, and what would happen if we could actually fix retention. One of those posts - Do Hivers Leave Because of Lack of Rewards? - concluded that absolute reward values don't predict whether users stay.
I am not a data scientist; I've always been an amateur with some knowledge of statistics and the ability to make charts in Excel and Google Sheets. That has meant my analyses have always had significant limitations. I've been generally aware of that, though of course not of where, specifically, I was going wrong.
For this follow-up, I'm trying something different. I gave Claude (Anthropic's AI model, specifically Opus 4.6) access to HiveSQL and asked it to critically evaluate my original rewards post, then design and run its own analysis. I pointed it toward my prior work and let it build on what I'd already found. I directed the investigation - asking questions, challenging the reasoning, requesting specific comparisons - but Claude designed the study, wrote and executed the SQL queries, ran the statistical analysis, generated the charts, and wrote up the findings you'll read below.
I think this kind of AI-assisted on-chain research has enormous potential, and I wanted to experiment with it transparently. Everything below is verifiable against HiveSQL, and the queries and analysis code are available on request.
What follows is Claude's analysis, in its own voice.
Research Findings — by Claude Opus 4.6
The following analysis was conducted at 's direction, using HiveSQL data and building on his prior retention research. I designed the study, wrote and ran the queries, performed the analysis, and drafted these findings.
guided the investigation and reviewed the results.
First: Does User Growth Even Help Hive?
Before digging into retention, asked me to address a prior question: is growing the user base actually good for Hive? He noted that some in the community may believe more users dilute the reward pool without creating proportional value. He'd previously tested this with Granger causality and found a bidirectional relationship — user activity predicts price and vice versa. He asked me to replicate the analysis independently.
I pulled 3,173 days of data from HiveSQL spanning both the Steem and Hive eras (April 2016 through December 2024): daily active posting authors and the daily token price from witness price feeds. Steem and Hive share the same codebase, reward mechanics, and migrated user base — there's no good reason to treat the fork as a wall.
The visual relationship is obvious: the 2017–2018 Steem boom and the 2021–2022 Hive bull market both show authors and price surging together. Within the Hive era alone, the daily correlation is r = 0.83.
But correlation doesn't establish direction. To test causality, I ran Granger tests on first-differenced data (daily changes, after confirming stationarity with Augmented Dickey-Fuller tests) at lags of 1 through 21 days. I ran the tests on three datasets: the full period, the Steem era alone, and the Hive era alone.
The result is asymmetric but bidirectional:
- Price → User Activity: The dominant direction. Highly significant (p < 0.001) from lag 2 onward in both the Hive era and the full period. When the price moves, author counts follow within days. This effect is consistent and overwhelming in every period tested.
- User Activity → Price: A real but weaker and slower effect. Across the full 2016–2024 period, user activity significantly predicts price changes starting around lag 8, strengthening through lags 12–21. In the Hive era alone, it's borderline significant at lag 5 (p = 0.045). In the Steem era, it only appears at lag 21. The signal is there, but it takes 1–3 weeks to materialize and is far less consistent than the reverse direction.
This partially confirms 's earlier finding of bidirectional causality, though with some differences. He found a significant effect at lag 1 (which I don't reproduce) and around lag 14 (which aligns with where my full-period results become significant). The discrepancy at short lags may come down to differences in the Granger implementation — he used the Real-Statistics.com Excel add-in, while I used Python's statsmodels. The broad picture is the same: both directions exist, but price drives users much more strongly and quickly than users drive price.
What does this mean for the "is growth worth it?" question?
The data supports a clear answer: yes, but the mechanism is slow. User activity does eventually feed into price — the Granger signal becomes significant at 1–3 week lags across the full history. But it's not the kind of rapid feedback loop where onboarding 100 users this week moves the needle next week. The value that users create operates through slower channels: growing the content library, building network effects, attracting developers, signaling ecosystem health to longer-horizon investors.
It's also worth acknowledging that user growth alone doesn't guarantee value creation. The reward pool is funded by inflation — a transfer from HIVE holders to authors and curators — and that's a bootstrapping mechanism, not an endgame. For the network to be truly self-sustaining, it needs demand-side drivers: people buying and staking HIVE because they need Resource Credits at scale, because app economies require it, or because the content and community attract real-world monetization. More users are a prerequisite for any of those paths, but not sufficient on their own. (This probably deserves its own post — for now, the point is simply that user growth is necessary, even if not the whole story.)
This makes sustainable retention more important than raw acquisition. If the payoff from user growth is slow-building, keeping 400 users for 6+ months contributes far more to these long-run network effects than onboarding 1,000 users who leave in a week.
Background: Why Revisit the Rewards Question?
's original analysis of whether users leave due to low rewards compared average rewards between leavers and stayers on a month-by-month basis and found that aggregate reward levels didn't correlate with aggregate departure rates. When asked to critically evaluate that post, I identified a methodological concern: the analysis commits what statisticians call the ecological fallacy — inferring individual behavior from group-level patterns.
Here's the intuition for why aggregate analysis can mislead: a bull market month produces high rewards, but it also attracts a wave of speculative newcomers who churn regardless of what they earn. At the macro level, high rewards coincide with high churn. But that doesn't mean rewards don't matter to individuals — the composition of the user base changed, and the group averages masked what was happening at the individual level.
In hindsight, this same limitation affected several of 's earlier aggregate analyses, including his studies of downvotes and reward inequality, where Granger causality tests on daily aggregate metrics showed weak or null effects.
To test whether rewards actually matter, you need to track individuals and ask: for a given user, did their reward level predict whether they stayed?
Methodology
I queried HiveSQL to build a cohort study:
- Cohort: All accounts created January–September 2023 that made at least one top-level post within their first 30 days (7,575 users)
- Reward measure: Average total payout (author + curator) per post in their first 30 days
- Retention measure: Whether they made any top-level post 3–6 months after account creation
- Control group: 5,865 established users (accounts created before 2022) measured over the same period
I used a 3–6 month retention window rather than the one-month window in the original analysis. In 's earlier work on brand new user retention, he found that 24% of new accounts make their last post within 3 days of joining. A one-month window conflates these instant dropoffs with users who genuinely tried the platform. The 3–6 month window captures whether someone actually integrated into the community.
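As a concrete illustration of the retention measure, here is how a 3–6 month window can be computed from a per-post log in pandas. The toy frame and column names are hypothetical; the real data comes from HiveSQL:

```python
import pandas as pd

# hypothetical per-post log: account name, account creation date, post date
posts = pd.DataFrame({
    "account":   ["alice", "alice", "bob"],
    "created":   pd.to_datetime(["2023-01-10", "2023-01-10", "2023-02-01"]),
    "post_date": pd.to_datetime(["2023-01-12", "2023-05-20", "2023-02-03"]),
})

# retained = any top-level post 3–6 months (here, 90–180 days) after creation
age_days = (posts["post_date"] - posts["created"]).dt.days
posts["in_window"] = age_days.between(90, 180)
retained = posts.groupby("account")["in_window"].any()
```

In this toy frame, alice (a post 130 days after creation) counts as retained; bob (last post at day 2) does not.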
Finding #1: Rewards Massively Predict Individual Retention
The gradient is unmistakable. Users who earned nothing in their first month retained at 7.7%. Users in the top reward quartile (>$3.07/post) retained at 43.9%. That's a 36 percentage point gap.
| Reward Tier | Avg Payout/Post | N | Retention Rate |
|---|---|---|---|
| Tier 1 | $0.00 | 3,057 | 7.7% |
| Tier 2 | $0.01–$0.08 | 1,130 | 20.7% |
| Tier 3 | $0.08–$0.71 | 1,129 | 28.2% |
| Tier 4 | $0.71–$3.07 | 1,129 | 31.8% |
| Tier 5 | >$3.07 | 1,130 | 43.9% |
This is the single strongest predictor of new user retention in the dataset.
Finding #2: It's Not Just Because Active Users Earn More
The obvious counterargument: users who post more earn more rewards AND are more engaged, so retention is really about engagement, not money.
This connects to 's earlier finding on engagement and retention, where he showed that the number of comment replies new users receive in their first few days has a measurable (if limited) effect on retention. So engagement clearly matters. But does it explain away the reward effect entirely?
To test this, I controlled for activity level by restricting to users who made 2–10 posts in their first month — removing one-post-and-gone accounts and hyperactive outliers, leaving users with comparable effort levels.
The left panel shows all users; the right panel shows only those with 2–10 posts. The reward gradient survives the activity control almost unchanged (9.4% → 45.7%), and the average post counts within each tier are now nearly identical (~3–5 posts), confirming the comparison is apples-to-apples.
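The mechanics of the tiering and the activity control can be sketched in pandas on simulated data. Everything below (column names, the 40% zero-payout share, the simulated retention effect) is illustrative, not the real cohort:

```python
import numpy as np
import pandas as pd

# simulated cohort: ~40% earn exactly $0, the rest a skewed payout distribution
rng = np.random.default_rng(2)
n = 7575
df = pd.DataFrame({
    "avg_payout": np.where(rng.random(n) < 0.4, 0.0, rng.lognormal(-1.5, 2.0, n)),
    "posts": rng.poisson(3, n) + 1,
})
# simulate a retention probability that rises with (log) payout
df["retained"] = rng.random(n) < 0.08 + 0.12 * np.log1p(df["avg_payout"]).clip(0, 3)

# Tier 1 = earned exactly $0; Tiers 2–5 = payout quartiles among earners
df["tier"] = 1
earners = df["avg_payout"] > 0
df.loc[earners, "tier"] = pd.qcut(df.loc[earners, "avg_payout"], 4, labels=False) + 2

# activity control: keep only comparable-effort users (2–10 posts in month one)
controlled = df[df["posts"].between(2, 10)]
rates = controlled.groupby("tier")["retained"].mean()
```

In the real analysis the gradient survives this control (9.4% → 45.7%); the sketch only shows how the tiers and the filter are constructed.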
A logistic regression confirms both factors matter independently:
| Predictor | Odds Ratio (per 1 SD) |
|---|---|
| log(reward + 1) | 1.76× |
| log(post count + 1) | 2.62× |
Posting more is the stronger predictor, but rewards provide a 76% increase in odds of retention per standard deviation, even after controlling for activity.
Finding #3: The Pattern Holds for Established Users Too
Among established users (accounts created before 2022), the same reward–retention gradient appears:
Established users who earned $0 in a given month retained at 51%. Those in the top quartile retained at 90%. The 39 percentage point gap is actually slightly larger than for new users.
In 's earlier post on how Hive is dominated by long-timers, he showed that small cohorts from 2016–2018 produce more posts and earn far more per post than much larger newer cohorts. That concentration of rewards toward established users now takes on a different significance: it may be actively contributing to new user churn.
The Zero-Reward Cliff: Hive's Biggest Retention Problem
Perhaps the most important finding isn't about the gradient — it's about the cliff at zero.
40.4% of new users who posted in their first month earned absolutely nothing. Not a fraction of a cent — zero. Their retention rate is catastrophic at 7.7%.
The jump from $0 to even a few cents matters enormously. Going from $0.00 to the T2 median of $0.025/post is associated with nearly tripling the retention rate (7.7% → 20.7%). The marginal value of the first cent is orders of magnitude higher than the marginal value of the hundredth dollar.
This may shed light on what found in his 2024 retention analysis, where monthly new user retention collapsed to an all-time low of 16.4%. If the proportion of zero-earning newcomers increased during that period — which seems likely given declining curation activity — the retention collapse may be a direct consequence of this cliff.
Would Redistribution Work? A Cost-Benefit Analysis
In 's earlier post asking would Hive explode if retention improved, he modeled scenarios where reducing monthly attrition by 50% would be as impactful as gaining 60% more new users every month. The zero-reward cliff might be the most actionable lever for achieving that.
The Upside: Substantial Retention Gains
- Conservative scenario: If $0 users achieved T2-level retention (20.7%), overall new-user retention would rise from 21.7% to 26.9% (+5.2pp), retaining ~400 additional users per 9-month cohort
- Moderate scenario: If they achieved T3-level retention (28.2%), overall retention would hit 30.0% (+8.3pp), retaining ~625 additional users
The cost would be remarkably small. Bringing every zero-earning user to the T2 median ($0.025/post) would cost roughly $198 total — just 0.2% of the rewards the new-user cohort already earned.
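The scenario arithmetic is simple enough to verify directly from the tier table. A back-of-the-envelope check, using only the counts and retention rates quoted above:

```python
# figures from the tier table above
zero_n, cohort_n = 3057, 7575            # Tier 1 ($0) users, full cohort
r_t1, r_t2, r_t3 = 0.077, 0.207, 0.282   # retention rates for Tiers 1-3
base_retention = 0.217                   # overall cohort retention

# conservative scenario: lift $0 users to Tier 2 retention
extra_conservative = (r_t2 - r_t1) * zero_n             # additional users retained
lift_conservative = (r_t2 - r_t1) * zero_n / cohort_n   # gain in overall retention

# moderate scenario: lift $0 users to Tier 3 retention
extra_moderate = (r_t3 - r_t1) * zero_n
lift_moderate = (r_t3 - r_t1) * zero_n / cohort_n
```

This reproduces the roughly 400 and 625 retained-user figures and the +5.2pp and +8.3pp lifts quoted above.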
The Downside Risk: Would Established Users Leave?
Given what showed in how long-timers dominate Hive — where pre-2020 cohorts collect over 40% of author rewards — any redistribution would primarily affect that concentrated group. Would they leave?
I tracked 4,803 established users who posted in both Q1 and Q3 of 2023, comparing how changes in their reward levels affected their activity in Q1 2024.
Moderate reward drops have almost no effect on established user retention.
| Reward Change | Retention | N |
|---|---|---|
| >50% drop | 61.6% | 973 |
| 25–50% drop | 83.1% | 927 |
| 10–25% drop | 85.3% | 648 |
| 0–10% drop | 85.9% | 344 |
| 0–10% gain | 86.2% | 253 |
Users whose rewards dropped 10–25% retained at 85.3% — statistically indistinguishable from users with stable rewards (~86%). You need a >50% collapse before established users start leaving in meaningful numbers. This makes intuitive sense: established users are on Hive for community, habit, and identity. A 10% pay cut doesn't change those fundamentals. A newcomer who gets nothing has no reason to come back.
Net Effect
Taking 10% from top-quartile established user rewards and redirecting it to zero-earning newcomers would:
- Lose: ~41 established users (0.2pp retention drop, barely measurable)
- Gain: ~397 new users retained
- Net: +356 retained users
A Proof of Concept Already Exists
This isn't purely theoretical. In 's analysis of OCD's onboarding initiative, he found that OCD-onboarded users retained at 34.6% at 6 months versus 18% for the general population — nearly double. OCD users also earned $4 per post compared to $0.92 for the average user. His analysis of individual onboarders showed the initiative achieving 52% third-month retention versus a 24.5% network average, with significant variation between onboarders — suggesting that the quality of human attention matters, not just the existence of a reward.
That's exactly the pattern this analysis predicts: directed curation attention (with its accompanying rewards) dramatically improves retention. OCD is effectively running the intervention I'm describing, just at small scale — covering only 0.6% of accounts.
Caveats
I want to be transparent about the limitations:
Correlation vs. causation: Rewards could be a proxy for community attention rather than the cause of retention. A user who gets upvoted also gets comments, followers, and a sense of belonging. It may be the social signal, not the money, that retains them.
's earlier work on engagement found that comment replies also predict retention, though with a more limited effect. This analysis can't fully separate rewards from the broader social signal they represent.
Content quality confound: Users who earn $0 may be posting low-quality or off-topic content. Giving them token rewards without fixing the underlying quality or discoverability problem may not produce the same retention boost the observational data would predict.
Mechanism matters: An automated "welcome bonus" bot might not carry the same psychological weight as an organic upvote from a community member. The source and social context of the reward could be as important as the dollar amount. The OCD data suggests human-curated onboarding works — but we don't know if a purely mechanical redistribution would achieve the same effect.
Top-level posts only: This analysis excludes comments. Users who primarily engage through comments are invisible to this study.
The established user sensitivity data is observational: Users whose rewards dropped >50% may have changed their behavior first (posted less, changed topics), causing the reward drop rather than the other way around.
Conclusions
's original analysis asked the right question but used the wrong unit of analysis. At the macro level, reward fluctuations wash out in the noise of changing user composition. At the individual level, the signal is overwhelming.
Looking across 's full retention series, a coherent picture emerges: Hive loses 30–40% of its authors every month, it's increasingly dominated by long-timers who absorb most rewards, new user retention hit an all-time low in 2024, and the reward inequality that results from this concentration has measurable effects on user activity. This analysis adds the missing piece: 40% of newcomers receive literally zero rewards, and that cliff is where the bulk of user loss happens.
Three takeaways for Hive:
The first cent matters more than the hundredth dollar. The most impactful intervention isn't raising rewards generally — it's eliminating the zero-reward cliff for newcomers.
Established users can absorb modest cuts. A 10–20% reduction in top-tier rewards is essentially invisible to retention. Hive's veterans stay for reasons beyond money.
The reward pool isn't the constraint — distribution is. The total cost of bringing every zero-earning newcomer to a non-zero reward level is trivial relative to the existing pool. The problem is that curation attention is concentrated, not that the pool is too small.
Whether the best mechanism is a curation incentive, an onboarding trail like OCD's program, a newcomer delegation, or something like HiveInvite bundled with initial curation — that's a design question. But the data is clear: Hive is losing users not because it pays too little, but because it pays 40% of its newcomers nothing at all.
Data: HiveSQL, queried April 2026. Cohort: 7,575 new accounts (Jan–Sep 2023) and 5,865 established accounts (pre-2022). Queries and analysis code available on request. For the full series on Hive user retention, see 's collection post.