Foreword — by 
My plan was to write a post arguing that Hive is currently in a slow death spiral caused by low user retention rates. But a death spiral is just a virtuous cycle running in reverse: if we could improve retention rates, we could turn the downward spiral around.
I thought the data from my previous posts supported this, but one of my earlier conclusions left a nagging doubt. That work confirmed that user activity can slowly influence price, but I had always assumed the influence was positive. What if more user activity was actually bad for the network (through increased costs and rewards extraction, perhaps)? So I set Claude onto the question. It turned out to be quite a rabbit hole, with surprising results.
Research Findings — by Claude Opus 4.6
The following analysis was conducted at 's direction using HiveSQL data. I designed the study, wrote and ran the queries, performed the statistical analysis, and drafted these findings.
guided the investigation, challenged the reasoning at each stage, and reviewed the results.
What Doesn't Predict Price
Ask how Hive is growing and the discussion usually centres on accounts created, daily active users, or active authors. None of these predict the token price.
I re-ran Granger causality tests — a standard econometric method that asks whether knowing one variable's past helps predict another's future — on daily active authors and HIVE price across the full chain history. Using log returns (the standard transformation in financial econometrics), daily active authors do not predict price at any lag tested (minimum p = 0.65 across 21 lags)*. The reverse is overwhelming: price predicts user activity from lag 2 onward (p < 0.001). People show up when the price goes up, not the other way around.
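The logic of a Granger test can be sketched from scratch. The snippet below is illustrative only (synthetic data, not the HiveSQL series): it compares a restricted regression that uses only a series' own lags against an unrestricted one that adds the other series' lags, and reports the F-statistic for whether the added lags help.

```python
import numpy as np

def granger_f(y, x, lags):
    """F-statistic for 'past values of x help predict y' beyond y's own lags.

    Restricted model:   y_t ~ const + y_{t-1..t-L}
    Unrestricted model: y_t ~ const + y_{t-1..t-L} + x_{t-1..t-L}
    """
    T = len(y)
    rows = T - lags
    Y = y[lags:]
    own = np.column_stack([y[lags - k:T - k] for k in range(1, lags + 1)])
    other = np.column_stack([x[lags - k:T - k] for k in range(1, lags + 1)])
    ones = np.ones((rows, 1))

    def ssr(X):
        # Sum of squared residuals from an OLS fit
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    ssr_r = ssr(np.hstack([ones, own]))           # own lags only
    ssr_u = ssr(np.hstack([ones, own, other]))    # plus the other series' lags
    df_u = rows - (1 + 2 * lags)
    return ((ssr_r - ssr_u) / lags) / (ssr_u / df_u)

# Synthetic data in which price leads activity, mirroring the finding
rng = np.random.default_rng(0)
T = 400
price = rng.normal(size=T)              # stand-in for log returns
activity = np.zeros(T)
for t in range(1, T):
    activity[t] = 0.5 * price[t - 1] + 0.3 * activity[t - 1] + 0.1 * rng.normal()

print(granger_f(activity, price, lags=3))   # large F: price predicts activity
print(granger_f(price, activity, lags=3))   # small F: activity does not predict price
```

The actual analysis used longer lag windows and the real daily series; this sketch only shows why the asymmetry in the two F-statistics is the whole story.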
Raw new account counts fare little better. At monthly resolution, new accounts narrowly miss significance (p = 0.055) — and that's before accounting for the fact that raw account numbers can be dominated by activity that has nothing to do with the social network. During the Splinterlands surge (July 2021 – January 2022), Hive was creating over 125,000 accounts per month — compared to roughly 5,800 in a typical month. Those accounts were created for a play-to-earn game, representing fundamentally different on-chain activity from social network participation. We can't fairly measure their retention against social metrics, and we can't easily measure it against game metrics either — Splinterlands changed how its on-chain transactions worked over time, and without social proof it's impossible to distinguish unique players from multi-accounts and bots. Either way, these months dominate any analysis that counts raw accounts.
So if the standard metrics don't predict price, does that mean growth doesn't matter? No. It means we have to be sure we are measuring the right thing.
What Actually Predicts HIVE's Price
I built a different metric: for each month from April 2016 through June 2024 (99 months total), I counted accounts created that month that made at least one top-level blog post three months later. This is specifically a measure of social network growth — it captures people who joined Hive as a blogging platform and were still using it three months on. It doesn't capture game-only users, and it's not meant to. A single blog post is a low bar, but in practice it's a reasonable proxy for a real person engaging with the platform.
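The cohort count is simple to state precisely. As a toy sketch (hypothetical account records, not the actual HiveSQL query), an account created in month M counts as retained if it made at least one top-level post in month M+3:

```python
from collections import Counter

# Hypothetical toy records (not real HiveSQL rows):
# (account, month created, months in which it made a top-level post)
accounts = [
    ("alice", "2024-01", {"2024-01", "2024-04"}),  # posted in month M+3
    ("bob",   "2024-01", {"2024-01"}),             # never came back
    ("carol", "2024-02", {"2024-05", "2024-06"}),  # posted in month M+3
]

def add_months(ym, n):
    """Shift a 'YYYY-MM' string forward by n months."""
    y, m = map(int, ym.split("-"))
    m += n
    return f"{y + (m - 1) // 12:04d}-{(m - 1) % 12 + 1:02d}"

# Retained = created in month M with at least one post in month M+3
retained = Counter()
for _, created, post_months in accounts:
    if add_months(created, 3) in post_months:
        retained[created] += 1

print(dict(retained))  # {'2024-01': 1, '2024-02': 1}
```

Bob drops out of the count; Alice and Carol each add one retained user to their cohort month.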
I then ran Granger causality tests at monthly resolution, excluding the Splinterlands surge period and verifying results hold with or without it.
| Test | Period | Significant lags | Direction | p-value |
|---|---|---|---|---|
| Retained users → Price | Full (2016–2024) | months 7–10 | Positive | < 0.001 |
| Retained users → Price | Hive era | month 4 | Positive | 0.04 |
| Retained users → Price | Hive excl. Splinterlands | month 4 | Positive | 0.04 |
| Raw new accounts → Price | Hive excl. Splinterlands | — | — | 0.055 (not sig) |
| Price → Retained users | Hive excl. Splinterlands | months 1–3 | Positive | 0.017 |
| Retention rate → Price | All periods | — | — | Not significant |
| Price → Retention rate | All periods | — | — | Not significant |
Retained new accounts predict price increases 4 months later in the Hive era (p = 0.04) and 7–10 months later across the full Steem+Hive history (p < 0.001). The direction is positive: more retained users, higher future price.
A caution on the Hive-era result: p = 0.04 is borderline. With multiple variables tested across multiple periods, a Bonferroni correction would push it past 0.05. The full-period result (p < 0.001) is the stronger anchor; the Hive-era result is consistent with it but should be treated as suggestive rather than definitive on its own.
What About Sell Pressure?
The strongest version of the "growth hurts price" argument goes: more users → more reward payouts → more selling on exchanges → lower price. If this is true, growing the user base is actively harmful — users extract more value than they create.
I tested this directly. Using daily data from HiveSQL on author reward payouts, HIVE transfers to 18 tracked exchange accounts (Bittrex, Binance, Huobi, Upbit, MEXC, and others), and token price, I fit a 4-variable VAR model on the Hive era and tested every link in the hypothesized chain.
It breaks at the first link. User activity does not significantly predict exchange outflows (p = 0.06). The reward pool is set by inflation, not by the number of active authors: more authors dilute each other's share but don't change the total paid out. After controlling for exchange flows, user activity has no additional predictive power for price (F = 0.52, p = 0.93).
Exchange outflows do predict price (p = 0.003) — there is real sell pressure in the system. And rewards do predict exchange outflows (p < 0.001) — some reward recipients sell. But the amount of selling is driven by price dynamics and the fixed inflation schedule, not by how many people are posting.
The strongest apparent signal was rewards → price, significant at every daily lag. But this turned out to be largely a statistical artifact. When rewards are measured in USD rather than native HIVE/HBD, the negative coefficient flips to positive: more USD rewards predict price going up, not down. The original negative signal was a denomination effect — when price falls, the blockchain prints more HIVE per payout, so "HIVE rewards up" and "price down" move together mechanically.
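The mechanical inverse relationship is easy to see with toy numbers (assumed values, not actual reward-pool figures):

```python
# A payout worth a roughly fixed dollar amount is printed as more HIVE
# when the token price is lower (assumed numbers for illustration).
usd_per_payout = 2.00
for price in (1.00, 0.50, 0.25):
    hive_printed = usd_per_payout / price
    print(f"price ${price:.2f} -> {hive_printed:.1f} HIVE printed")
# As price falls from $1.00 to $0.25, HIVE printed rises from 2.0 to 8.0:
# "HIVE rewards up" and "price down" co-move mechanically, no selling involved.
```

This is why measuring rewards in USD, rather than native HIVE/HBD, removes the artifact.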
The data does not support the hypothesis that more users create sell pressure that hurts the price. We can't rule out small, diffuse effects that daily aggregates are too coarse to detect — but large, systematic sell pressure from user growth is not visible in eight years of on-chain data.
Four Things the Data Tells Us About Growth
1. It's the absolute count, not the rate.
Retention rate — the percentage of newcomers who stay — does not predict price in any period tested. What matters is the absolute number of people who stay. Onboarding 200 retained users from 4,000 signups produces the same price signal as 200 retained from 10,000 signups.
This sounds like it argues against investing in retention quality. It doesn't — it argues for investing in both acquisition and retention, because the absolute count is acquisition × retention rate. But it does mean that obsessing over the retention percentage in isolation misses the point. A community with 3% retention that onboards 10,000 people a month (300 retained) is doing more for the network than one with 50% retention that onboards 20 a month (10 retained).
2. Bear markets don't kill retention.
Price does not significantly predict retention rate in any period tested (p > 0.27 everywhere). Whether HIVE is at $0.90 or $0.20, the proportion of newcomers who stay is statistically indistinguishable. Investing in onboarding and retention during bear markets isn't fighting a losing battle — the users you retain are just as likely to stick around as those who joined during a bull run.
3. The relationship is bidirectional — but the forward signal is distinct.
Price does predict retained-user counts at lags of 1–3 months (p = 0.017): higher prices attract more people who stay. But the retained → price direction (at lag 4) is a separate signal from the price → retained direction (lags 1–3). Both exist, and neither absorbs the other.
4. Price scales proportionally with retained users.
A log-log regression of price on retained-user counts (with a time trend) for the Hive era gives an elasticity of 0.99: a 10% increase in retained users associates with roughly a 10% increase in price. This is an association, not a proven causal effect — but it tells us there's no evidence of diminishing returns to network growth in this data. The hundredth retained user appears to contribute as much as the tenth.
For comparison, the same regression using active authors (total posting users, not just new retained ones) gives an elasticity of only 0.20 — five times weaker. The price signal is specifically in new retained users, not in total posting activity.
Metcalfe's law would predict an elasticity of 2.0 (value grows with the square of users). We find 1.0. The network effect exists but it's linear, not superlinear. The quadratic term is not significant (p = 0.57).
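The regression behind these elasticities can be sketched as follows. This is synthetic data generated with a true elasticity of 1.0 (illustrative values, not the actual retained-user or price series), recovered by regressing log price on log retained users plus a linear time trend:

```python
import numpy as np

# Synthetic series with a true elasticity of 1.0 (assumed toy parameters)
rng = np.random.default_rng(1)
months = np.arange(60)
retained = 200 * np.exp(0.02 * months + 0.3 * rng.normal(size=60))
price = 0.001 * retained * np.exp(0.1 * rng.normal(size=60))

# Log-log regression: log(price) ~ const + log(retained) + time trend
X = np.column_stack([np.ones(60), np.log(retained), months])
beta, *_ = np.linalg.lstsq(X, np.log(price), rcond=None)
elasticity = beta[1]
print(round(elasticity, 2))  # recovers a value near the true 1.0
```

Testing Metcalfe's law amounts to adding a squared log-retained term to `X` and checking whether its coefficient is significant; in the real data it was not (p = 0.57).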
The Retention Flywheel
The findings above describe a feedback loop. Retained users predict higher prices (lag 4 months). Higher prices predict more retained users (lags 1–3 months). This is a flywheel — and we can estimate how strong it is.
The forward elasticity is ~1.0: a 10% increase in retained users associates with a ~10% price increase. The reverse elasticity is ~0.31: a 10% price increase associates with ~3.1% more retained users one month later. Multiply these together and you get a loop gain of ~0.31 — meaning each turn of the flywheel preserves about 31% of the previous impulse.
That translates to a total amplification of roughly 1.45×. A boost to retained users doesn't just produce the initial price association — the feedback loop amplifies it by about 45% as it cycles through price → acquisition → retention → price.
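The arithmetic behind the 1.45× figure is just a geometric series built from the two elasticities reported above:

```python
# Loop-gain arithmetic from the two estimated elasticities
forward = 1.0    # elasticity: retained users -> price
reverse = 0.31   # elasticity: price -> retained users (one month later)

loop_gain = forward * reverse           # fraction of the impulse preserved per turn
amplification = 1 / (1 - loop_gain)     # geometric series 1 + g + g^2 + ...
print(round(loop_gain, 2), round(amplification, 2))  # 0.31 1.45
```

As the caveats note, this treats the loop as operating cleanly across lags, so the exact factor is illustrative.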
Crucially, the reverse channel runs entirely through acquisition, not retention quality. Higher prices bring more people through the door (p = 0.0003); they don't change the proportion who stay (p = 0.30). Retention rate is the one variable in this system that price doesn't move — which means it's the variable that human effort can shift without fighting the market.
This is what makes retention work so leveraged. Acquisition can be boosted by spending money (as InLeo's onboarding campaign demonstrated) or by building apps that attract non-crypto audiences — but it's expensive, and the data shows price is a strong driver regardless. Retention rate responds to onboarding quality, community infrastructure, curation, and first-post rewards — things people can build without outspending the market. And because retention rate multiplies acquisition at every turn of the flywheel, improving it compounds.
In our rewards and retention post, we found that 40% of newcomers earn literally zero rewards, and that the jump from $0 to even a few cents nearly triples retention. In the community density post, we found that communities like Aliento retain newcomers at 46% versus a 19% baseline — primarily through dedicated in-language curation and fast comment response. Those findings now have an economic dimension: each improvement to retention rate doesn't just keep more people around, it strengthens the multiplier on a feedback loop that connects to price.
The flywheel is moderate, not explosive — a 1.45× amplifier, not a 10× one. In an earlier post, modeled what would happen if Hive could improve retention, and found that naive exponential projections "fail the smell test." The econometric data agrees: the loop is real but it's linear (elasticity 1.0, not Metcalfe's 2.0). Hive won't explode from better retention. It will grow steadily — and that steady compounding is exactly what a sustainable network needs.
The answer to "is growing Hive worth it?" is yes — and retention is the lever. The metric that predicts price is not accounts created, not daily active authors, not posts published. It's people who joined and were still here three months later. And the single most effective way to increase that number is to make those first three months worth staying for.
Caveats
The Hive-era monthly result is borderline (p = 0.04). With 52 monthly observations and 6 lags tested, this is significant at conventional levels but not overwhelming. The full-period result (p < 0.001 across 98 months) is much stronger. The Hive-era finding should be treated as suggestive, not definitive.
Multiple testing. We tested multiple variables (raw accounts, retained accounts, retention rate, active authors) across multiple periods and lag structures. Some correction for multiple comparisons is warranted. The full-period retained-accounts result survives any reasonable correction; the Hive-era-only result may not.
Granger causality ≠ true causality. Granger tests measure predictive power, not causal mechanisms. A third variable (overall crypto sentiment, marketing campaigns, app launches) could drive both retained-user growth and subsequent price increases.
The Splinterlands exclusion is a judgment call. I excluded July 2021 through January 2022 on the basis that these months are dominated by game-account creation that is qualitatively different from social user onboarding. This is defensible, but the results should be checked for sensitivity to the exact exclusion window. We cannot say whether Splinterlands helped or hurt the price — the account surge coincided with the 2021 crypto bull market, and the two effects are impossible to separate.
"Retained" means posted again in month M+3. A user who posted once in their fourth month and never again counts as retained. This is a low bar. A stricter definition (e.g., posted in months M+3 and M+6) would produce a cleaner signal but smaller sample sizes.
The elasticity of 0.99 is a level association, not a causal estimate. It describes how price and retained users co-move after removing a linear time trend, not the effect of adding one more retained user.
The sell-pressure finding applies at the daily aggregate level. It's possible that individual-level reward cashouts create sell pressure too diffuse to detect in daily aggregates, or that the effect operates at horizons beyond the 21-day lag window we tested. The analysis rules out large, systematic sell pressure — not small, distributed effects.
The flywheel amplification is approximate. The 1.45× estimate comes from multiplying two level-regression elasticities (forward and reverse) and treating the result as a geometric series. This assumes the loop operates cleanly across lags, which simplifies the actual dynamics. The qualitative point — that a feedback loop exists and retention rate is the lever humans can pull — is robust; the precise amplification factor should be treated as illustrative.
The daily Granger null is specification-dependent. The original first-differences specification produced a borderline signal (p = 0.045). The log-returns specification (standard in financial econometrics) produces a clear null (p > 0.65). Both are defensible transformations. We favor log returns because they normalize for scale, but we should be transparent that the null result depends on this choice.
Data: HiveSQL, queried May 2026. Daily data: 3,173 days (April 2016 – December 2024). Monthly retention: 99 cohort months (April 2016 – June 2024). Exchange accounts tracked: Bittrex, Binance, Huobi, Upbit, MEXC, and 12 others. Reward data from VOAuthorRewards; staking data from TxTransfers (transfer_to_vesting). All analysis code available at GitHub repo. For the full retention series, see 's collection post.
What's Next
The data above shows that retained users predict price — and that there's no detectable sell pressure from user growth at the aggregate level. But one response I expect to this post is a version of the question recently put to me:
"While retention rates are high, are these users benefiting the economy or just draining it? If they are draining it, is seeking a higher retention among them a good idea?"
In other words: the aggregate sell pressure finding is fine, but what about specific communities? What about the users from countries where a few dollars of HIVE rewards is a meaningful income — are they the ones pulling value out while everyone else builds?
That's the subject of the next post.
* This contradicts an earlier finding: the improved methodology reduced that weak signal to a non-finding. The retained-users result is substantially stronger than the earlier result relating to new authors.