Foreword — by 
Years ago I used some basic Granger causality to find that low user engagement may be a factor in poor user retention. Today I want to revisit that and explore it in greater depth. It turns out that not only is engagement quite important for user retention, especially on your first post, but even bot responses appear to help.
Further, the general decline and lack of buzz around our platform have contributed to a gradual drop in engagement, which in turn appears to feed the slow death spiral the network finds itself in.
I have a second post coming today, which describes what appears to be the main cause of the malaise...
Research Findings — by Claude Opus 4.6
This analysis was conducted at 's direction, building on the engagement findings from the community density post and the retention flywheel framework from the users and price post. I designed the study, wrote and ran the queries against HiveSQL, performed the statistical analysis, and drafted these findings.
directed the investigation and reviewed the results. Everything below is verifiable on-chain.
The Question
In the community density post, we found that the Spanish-speaking community retains newcomers at 2-3x the platform baseline — and that the engagement experience explains much of the gap. Venezuelan+Spanish newcomers get their first comment in 112 minutes; non-retained American newcomers wait nearly two days. 98% of retained Spanish newcomers received comments on their first post; only 42% of non-retained Americans did.
That finding raised an obvious follow-up: is the engagement gap widening? Has Hive gotten worse at welcoming newcomers, and if so, how much does it matter?
How a Newcomer's First Day Has Changed
I measured two things for every monthly cohort from April 2016 through December 2025 (117 months): the percentage of newcomers whose first post received at least one reply from another user, and how long they waited for that reply.
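For anyone who wants to reproduce these cohort metrics, the computation is simple once the per-post data is extracted. Below is a minimal pandas sketch; the dataframe and column names are hypothetical stand-ins for the HiveSQL output, not the actual schema.

```python
import pandas as pd

def cohort_engagement(first_posts: pd.DataFrame) -> pd.DataFrame:
    """Per-cohort reply rate and median wait for the first reply.

    Assumes one row per newcomer's first post with hypothetical columns:
      cohort_month            - month the account was created
      author                  - the newcomer
      minutes_to_first_reply  - wait in minutes, NaN if nobody ever replied
    """
    got_reply = first_posts["minutes_to_first_reply"].notna()
    return (
        first_posts.assign(got_reply=got_reply)
        .groupby("cohort_month")
        .agg(
            cohort_size=("author", "nunique"),
            reply_rate=("got_reply", "mean"),                # share with >= 1 reply
            median_wait_min=("minutes_to_first_reply", "median"),
        )
    )
```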
Reply rate — the chance of getting any response at all:
| Era | Typical reply rate | Typical cohort size |
|---|---|---|
| Steem 2017 H2 | 75% | ~23,000/month |
| Steem/Hive 2018 | 76% | ~36,000/month |
| Hive 2021 | 68% | ~2,500/month |
| Hive 2024 | 45% | ~1,300/month |
| Hive 2025 | 56% | ~1,000/month |
Median time to first reply:
| Era | Median wait |
|---|---|
| Steem 2017 H2 | 8 minutes |
| Steem/Hive 2018 | 16 minutes |
| Hive 2021 | 42 minutes |
| Hive 2024 | 2 hours 39 minutes |
| Hive 2025 | 58 minutes |
In mid-2017, a newcomer posting to this blockchain had a roughly 3-in-4 chance of getting a reply, typically within minutes. By 2024, it was closer to a coin flip, with the typical wait measured in hours.
The 2025 numbers show a partial recovery — reply rates climbing back to 56% and median wait times dropping to under an hour. But this improvement needs an asterisk, which we'll get to shortly.
Median vs. Mean: The Typical Experience Is What Matters
Early in this investigation, we computed average time-to-first-reply and got numbers ranging from 700 to 44,000 minutes — figures so large they seemed absurd. The problem: a small number of posts receive their first reply weeks or months later, pulling the average into meaninglessness. The median is the honest metric — it tells you what the typical newcomer actually experienced.
At Hive's peak engagement periods, the median newcomer waited 2-6 minutes for their first reply. They posted, and someone responded before they'd finished reading through other posts. Today's median of 1-3 hours means many newcomers close the tab before anyone speaks to them.
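A tiny illustration with made-up numbers (not the actual data) shows how a couple of stragglers wreck the mean while leaving the median sensible:

```python
import numpy as np

# Hypothetical waits in minutes: most newcomers were answered quickly,
# but two posts only got their first reply weeks later.
waits = np.array([3, 5, 6, 8, 12, 25, 40, 60, 20_000, 44_000])

print(np.median(waits))  # 18.5   -> the "typical" experience
print(waits.mean())      # 6415.9 -> dragged up by the two stragglers
```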
The Human Reply Rate
Not all replies are created equal. Dozens of automated services reply to posts with badge notifications, token tips, or template welcomes. These replies are real — they show up in the newcomer's notification feed — but they're not the same as a human reading your post and responding to it.
I classified approximately 40 accounts as bots or automated services and separated the reply rate into human and total components:
| Era | Total reply rate | Human reply rate | Gap |
|---|---|---|---|
| Steem 2017 H2 | 75% | 64% | 11pp |
| Steem/Hive 2018 | 76% | 56% | 20pp |
| Hive 2021 | 68% | 61% | 7pp |
| Hive 2024 | 45% | 36% | 9pp |
| Hive 2025 | 56% | 40% | 16pp |
The 2025 "recovery" in total reply rate is substantially bot-driven. Human reply rates improved only modestly (from 36% to 40%), while the gap between total and human reply rate widened to 16 percentage points — the largest in the dataset. More newcomers are getting automated notifications, but the underlying human engagement hasn't recovered proportionally.
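Mechanically, the human reply rate is the same computation with one extra filter on the replier. A minimal sketch follows, with placeholder account names rather than the real ~40-entry bot list, and hypothetical column names:

```python
import pandas as pd

# Placeholder names only; the real classification covers ~40 automated accounts.
BOT_ACCOUNTS = {"example-badge-bot", "example-tip-bot", "example-welcome-bot"}

def reply_rates(first_posts: pd.DataFrame, replies: pd.DataFrame) -> pd.Series:
    """Total vs. human reply rate over one set of first posts.

    first_posts: one row per newcomer, with an 'author' column.
    replies:     depth-1 comments on those posts, with 'post_author'
                 (the newcomer) and 'replier' columns.
    """
    any_reply = set(replies["post_author"])
    human_reply = set(
        replies.loc[~replies["replier"].isin(BOT_ACCOUNTS), "post_author"]
    )
    total = first_posts["author"].isin(any_reply).mean()
    human = first_posts["author"].isin(human_reply).mean()
    return pd.Series({
        "total_reply_rate": total,
        "human_reply_rate": human,
        "gap_pp": (total - human) * 100,
    })
```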
Does Engagement Actually Predict Retention?
Previous work established a correlational link between engagement and retention at the community level. Can we measure the relationship more precisely?
The aggregate picture
I built a multi-factor regression model across 103 monthly cohorts, predicting retention from four variables: human reply rate, community size (active posting authors), cohort size (selection effect), and token price.
90-day retention model (R² = 0.45):
| Factor | Effect | Interpretation |
|---|---|---|
| Human reply rate | +0.24pp per 1pp | A 10pp increase in human reply rate associates with +2.4pp retention |
| log(cohort size) | +8.6pp per doubling | Larger cohorts retain better in absolute terms after controlling for other factors |
| log(active authors) | -12.0pp per doubling | Confounded by era — see discussion |
| Token price | -2.5pp per $1 | Not significant after controlling for cohort size |
1-year retention model (R² = 0.58):
| Factor | Effect | Interpretation |
|---|---|---|
| Human reply rate | +0.32pp per 1pp | A 10pp increase in human reply rate associates with +3.2pp retention |
| log(cohort size) | +4.6pp per doubling | Weaker at 1yr — selection washes out over time |
| log(active authors) | -11.9pp per doubling | Same confound |
| Token price | not significant | — |
Human reply rate is a significant positive predictor of retention at both time horizons. The negative coefficient on active authors is counterintuitive and requires explanation: it's confounded with era. The early Steem period had both the most active authors and the lowest retention rates (because enormous surges brought poorly-selected cohorts). After controlling for cohort size, the active-author coefficient captures the remaining era effect. In simpler models, community size is positively correlated with engagement and retention.
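The aggregate model's form can be reproduced with a short ordinary least-squares fit. This is a minimal statsmodels sketch under assumed column names, not the code actually used; base-2 logs are taken here so the coefficients read as "per doubling", matching the tables above.

```python
import numpy as np
import statsmodels.api as sm

def fit_retention_model(cohorts):
    """OLS: retention ~ human reply rate + log2(cohort size)
                       + log2(active authors) + token price.

    `cohorts` is assumed to have one row per monthly cohort with the
    hypothetical columns below, in the units used in the tables above.
    """
    X = sm.add_constant(np.column_stack([
        cohorts["human_reply_rate_pct"],    # percentage points
        np.log2(cohorts["cohort_size"]),    # coefficient = effect per doubling
        np.log2(cohorts["active_authors"]),
        cohorts["token_price_usd"],
    ]))
    return sm.OLS(cohorts["retention_90d_pct"].to_numpy(), X).fit()

# model = fit_retention_model(cohorts)
# model.rsquared, model.params, model.pvalues
```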
The individual picture
The aggregate model tells us that months with better engagement have better retention — but that could be a coincidence of composition rather than a causal link. To test whether engagement directly matters to individual newcomers, I classified 59,215 newcomers from the 2021-2023 cohorts by the type of response their first post received:
| What happened | n | 90-day retention | 1-year retention |
|---|---|---|---|
| No reply at all | 19,852 | 27.5% | 5.5% |
| Bot reply only | 4,713 | 34.6% | 12.9% |
| Human reply only | 13,385 | 37.4% | 9.1% |
| Both human and bot | 21,265 | 55.7% | 18.2% |
Silence is the strongest predictor of churn. Nearly three-quarters of newcomers who received no reply at all were gone within 90 days. Only about 1 in 18 were still posting a year later.
Any response — even an automated one — is better than nothing. A bot-only reply lifts 90-day retention by 7 percentage points over silence. But human engagement is the real lever: a human reply raises the chance of still being around a year later from 5.5% to 9.1%, and a human reply combined with a bot reply raises it to 18.2%.
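The four buckets are derived from per-newcomer counts of human and bot replies on the first post. A minimal sketch of the classification and the retention roll-up, with hypothetical column names:

```python
import pandas as pd

def first_reply_bucket(row) -> str:
    """Classify a newcomer by who replied to their first post."""
    if row["human_replies"] > 0 and row["bot_replies"] > 0:
        return "human + bot"
    if row["human_replies"] > 0:
        return "human only"
    if row["bot_replies"] > 0:
        return "bot only"
    return "no reply"

def retention_by_bucket(newcomers: pd.DataFrame) -> pd.DataFrame:
    """Retention by first-post reply type.  Assumes hypothetical columns
    human_replies, bot_replies, retained_90d, retained_1y (booleans)."""
    return (
        newcomers.assign(bucket=newcomers.apply(first_reply_bucket, axis=1))
        .groupby("bucket")
        .agg(n=("bucket", "size"),
             retention_90d=("retained_90d", "mean"),
             retention_1y=("retained_1y", "mean"))
    )
```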
The standout finding is that newcomers who received both a human reply and a bot reply retained at dramatically higher rates than those who received only a human reply (55.7% vs. 37.4% at 90 days). This partly reflects community selection — 44% of human+bot recipients posted in well-organized communities like OCD, Aliento, and Hive Learners, which have both active human curators and bot integrations, compared to only 11% of human-only recipients.
To control for this, I measured retention within the same communities:
| What happened | Weighted 90-day retention (within-community) |
|---|---|
| No reply | 20.8% |
| Bot only | 29.8% |
| Human only | 32.7% |
| Human + bot | 52.4% |
Even within the same community, the pattern holds: human+bot outperforms human-only by roughly 20 percentage points. The bot reply provides immediate acknowledgment — a badge, a token, a notification that something happened — while the human reply may come hours later. Together, they create a sense that the platform is alive and paying attention, and keep the newcomer around.
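One way to compute a weighted within-community comparison like the table above is to take retention per (community, bucket) cell and then average across communities using each community's share of newcomers as the weight, so that community composition cannot drive the comparison. A minimal sketch under assumed column names (the exact weighting used may differ):

```python
import pandas as pd

def within_community_retention(newcomers: pd.DataFrame) -> pd.Series:
    """Retention by reply bucket, re-weighted onto a common community mix.

    Assumes hypothetical columns: community, bucket, retained_90d.
    """
    # retention for each (community, bucket) cell
    cell = (newcomers.groupby(["community", "bucket"])["retained_90d"]
            .mean().unstack("bucket"))
    # weight = each community's share of all newcomers
    weights = newcomers["community"].value_counts(normalize=True)
    weights = weights.reindex(cell.index).fillna(0)
    weighted = cell.mul(weights, axis=0)
    # normalise by the weight actually available for each bucket,
    # since not every community has newcomers in every bucket
    available = cell.notna().mul(weights, axis=0).sum()
    return weighted.sum() / available
```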
Controlling for post quality and rewards
Engagement correlates with two other things that predict retention: post length (a proxy for effort) and payout. Posts with 10+ human replies average $17.96 payout and 5,795 characters; posts with zero human replies average $0.15 and 1,471 characters. Are human replies just a marker for higher-quality posts that would have retained their authors anyway?
To test this, I measured retention by reply count within payout and post-length bands, holding the other variable roughly constant.
Within payout bands (90-day retention):
| Human replies | $0 payout | $0.01–$1 | $1–$5 | $5+ |
|---|---|---|---|---|
| 0 | 20.7% (n=18,632) | 34.8% (n=5,200) | 50.7% (n=672) | 53.1% (n=147) |
| 1 | 25.9% (n=7,538) | 39.5% (n=4,671) | 50.3% (n=981) | 54.1% (n=606) |
| 2–3 | 30.6% (n=2,206) | 41.5% (n=4,440) | 49.9% (n=1,701) | 58.6% (n=1,815) |
| 4+ | 32.8% (n=531) | 48.0% (n=2,322) | 54.4% (n=2,451) | 65.0% (n=5,487) |
Within post-length bands (90-day retention):
| Human replies | <500 chars | 500–2K | 2K–5K | 5K+ |
|---|---|---|---|---|
| 0 | 20.7% (n=11,166) | 28.0% (n=7,490) | 28.8% (n=4,503) | 25.3% (n=1,492) |
| 1 | 27.0% (n=5,243) | 35.8% (n=4,495) | 40.4% (n=3,045) | 35.7% (n=1,013) |
| 2–3 | 32.9% (n=2,190) | 42.7% (n=3,205) | 48.0% (n=3,360) | 51.6% (n=1,407) |
| 4+ | 39.9% (n=666) | 49.2% (n=2,001) | 56.5% (n=4,323) | 65.6% (n=3,801) |
Human replies independently predict retention after controlling for both payout and post length. Among newcomers who earned $0 — nearly half the dataset — getting one human reply still lifts retention by 5 percentage points, and getting 4+ lifts it by 12 points. Among short posts under 500 characters, the same pattern holds. Engagement is not merely proxying for content quality.
The interaction is also notable: replies matter most for longer posts. Among $5+ posts, the gap between 0 and 4+ replies is 12 percentage points; among 5K+ character posts, it is 40 percentage points. This suggests effort and engagement are multiplicative — a newcomer who writes something substantial and gets a response has the strongest signal that the platform is worth their time.
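The stratification behind these tables amounts to a two-way cut of the data. A minimal pandas sketch for the payout-band version (the length-band version swaps in character count), with hypothetical column names:

```python
import pandas as pd

def stratified_retention(newcomers: pd.DataFrame) -> pd.DataFrame:
    """90-day retention by human reply count within payout bands.

    Assumes hypothetical columns: payout_usd, human_replies, retained_90d.
    """
    payout_band = pd.cut(
        newcomers["payout_usd"],
        bins=[-0.001, 0.0, 1.0, 5.0, float("inf")],
        labels=["$0", "$0.01-$1", "$1-$5", "$5+"],
    )
    reply_band = pd.cut(
        newcomers["human_replies"],
        bins=[-1, 0, 1, 3, float("inf")],
        labels=["0", "1", "2-3", "4+"],
    )
    return newcomers.pivot_table(
        index=reply_band, columns=payout_band,
        values="retained_90d", aggfunc="mean", observed=True,
    )
```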
How many replies matter?
Each additional human reply lifts retention, but with diminishing returns:
| Human replies | 90-day retention | Marginal lift |
|---|---|---|
| 0 | 24.7% | — |
| 1 | 33.5% | +8.8pp |
| 2 | 41.2% | +7.7pp |
| 3 | 47.8% | +6.6pp |
| 4 | 49.7% | +1.9pp |
| 5 | 53.8% | +4.1pp |
| 6+ | 54–66% | +1–5pp per step |
The first three human replies do most of the work — from 24.7% to 47.8%, nearly doubling retention. After that, each additional reply still helps but the returns diminish. This is operationally important: a community doesn't need ten people to welcome every newcomer. Two or three substantive responses capture most of the retention benefit.
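The marginal-lift column is simply the first difference of retention across reply counts. A minimal sketch, with hypothetical column names:

```python
import pandas as pd

def marginal_lift(newcomers: pd.DataFrame) -> pd.DataFrame:
    """90-day retention and marginal lift by human reply count.

    Assumes hypothetical columns: human_replies, retained_90d.
    """
    count = newcomers["human_replies"].clip(upper=6)  # 6 stands for "6+"
    by_count = newcomers.groupby(count)["retained_90d"].mean() * 100
    return pd.DataFrame({
        "retention_90d_pct": by_count,
        "marginal_lift_pp": by_count.diff(),  # change vs. one fewer reply
    })
```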
The Vicious Cycle
These findings connect to the retention flywheel we documented previously, which showed that retained users predict price (elasticity ~1.0) and price predicts acquisition. Engagement adds a new link to that loop.
The chain works like this:
Fewer active users → fewer humans available to reply → newcomers wait longer or get no reply → worse retention → even fewer active users
The correlations support each link:
- Active authors correlate positively with human reply rate (r = +0.18 at 3-month lag)
- Human reply rate correlates positively with 90-day retention (r = +0.31 contemporaneous, r = +0.30 at 3-month lag)
- Today's retention rate predicts future community size (r = +0.59 at 3-month lag in our previous analysis)
This is the mechanism through which the "death spiral" operates at the human level. The price flywheel (retention → price → acquisition → retention) is one loop. The engagement flywheel (community size → engagement → retention → community size) is another, nested inside it. When the community shrinks, both loops weaken simultaneously.
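Each link above is a correlation between one monthly series and another shifted forward in time. A minimal sketch of that computation, assuming a monthly dataframe with hypothetical column names:

```python
import pandas as pd

def lagged_corr(series_a: pd.Series, series_b: pd.Series, lag_months: int) -> float:
    """Correlation between this month's A and B `lag_months` later.

    Both series are assumed to be monthly and share a DatetimeIndex;
    shifting B backward pairs A at month t with B at month t + lag_months.
    """
    return series_a.corr(series_b.shift(-lag_months))

# e.g. lagged_corr(monthly["active_authors"], monthly["human_reply_rate"], 3)
#      lagged_corr(monthly["human_reply_rate"], monthly["retention_90d"], 3)
```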
The Critical Mass Question
's motivating question was whether Hive needs a critical mass of active users to sustain the engagement levels required for healthy retention.
The data doesn't give us a clean threshold — the relationship between community size and engagement is continuous, not a step function. But the trajectory is concerning:
- In mid-2017, ~35,000 active posting authors sustained reply rates above 75% and median response times under 10 minutes.
- In 2021, ~11,000 active authors sustained reply rates of 68% and median response times of 42 minutes — worse, but functional.
- In 2024, ~8,000 active authors produced reply rates of 45% and median response times over 2.5 hours. More newcomers were met with silence than with a response.
- In 2025, active authors have dropped further to ~7,000-8,000.
The question is whether the current community is large enough to generate the engagement needed to retain newcomers at a rate that sustains — let alone grows — the community. With more than 40% of newcomers receiving no response at all in recent months, the data suggests the platform is below whatever that threshold of "enough" is.
The communities that have solved this problem — Aliento, OCD's newcomer program, Hive Learners — have done so by concentrating curation resources. They can sustain high engagement because they've built infrastructure to route newcomer posts to active human curators. But these communities serve a fraction of total newcomers. The long tail — users who post without landing in a well-organized community — faces the 2024 experience: a coin flip on getting any response, and hours of waiting if a reply comes at all.
What Changed in 2025?
The partial recovery in 2025 engagement metrics deserves scrutiny. Reply rates climbed from their 2024 lows (28-45%) back to 56-68% in some months, and median wait times dropped below an hour.
Some of this is genuine: smaller cohorts are easier to serve, and several community initiatives have focused on newcomer engagement. But a significant portion is bot-driven — the gap between total and human reply rates reached 16 percentage points in 2025, the widest in the dataset. The risk is that the headline numbers look healthier than the underlying reality. A newcomer getting a Hivebuzz badge notification and a PizzaBot tip is not having the same experience as one getting a thoughtful comment from a human who read their post.
Implications
1. Engagement is the lever inside the lever. The previous post showed retention rate is the variable human effort can shift. This post identifies what within the newcomer experience most strongly predicts retention: whether another human responded to them. A 10 percentage point improvement in human reply rate associates with 2-3 percentage points higher retention — which, compounded through the price flywheel, amplifies further.
2. The engagement decline is a bigger problem than it appears. Reply rates have halved and response times have increased 10-20x since the Steem era. This isn't just a symptom of community shrinkage — it's an accelerant. Each unreplied newcomer who leaves makes the next newcomer's experience slightly worse.
3. Bots complement human engagement. Individual-level data shows bot replies are positive for retention — newcomers who received both a human reply and a bot reply retained at higher rates than those who received only a human reply, even within the same community. Bots provide immediate acknowledgment while human responses arrive later; together they create a more responsive welcome experience. The danger is not bots themselves, but bots replacing humans rather than supplementing them — which is what the 2025 data suggests may be happening.
4. Community infrastructure works, but doesn't scale to the whole platform. Aliento, OCD, and Hive Learners have built exactly the right thing — dedicated human engagement with newcomers. But they serve a fraction of new users. The challenge is either scaling those models or building platform-level tools that reduce the human effort required per newcomer welcome.
5. The critical mass concern is real. With ~7,000-8,000 active authors and dropping, the platform may be near the lower bound of the community size needed to sustain organic newcomer engagement. Communities that have concentrated their curation resources can still function above this bound locally, but the platform as a whole is below it.
Caveats
"Reply" means any depth-1 comment on the newcomer's first top-level post. This captures direct responses but not deeper conversation threads. A one-word reply counts the same as a substantive paragraph. The metric measures whether someone showed up, not the quality of the interaction.
Bot classification is approximate. I classified ~40 accounts as bots or automated services based on known behavior patterns. Some borderline cases exist — a few accounts post formulaic welcome comments but may be human-operated. Misclassification of a handful of accounts doesn't materially affect the results because the human/bot split is measured at scale (59,000+ newcomers).
The individual-level retention analysis covers 2021-2023 only. This window was chosen to ensure enough time had elapsed for 1-year retention to be measurable. Results may differ for earlier eras when the platform dynamics were different.
Community selection is a real confounder in the individual-level analysis. Newcomers who post in well-organized communities are both more likely to receive human+bot replies and more likely to be higher-quality users. The within-community analysis controls for community membership, and the payout/post-length stratification controls for observable post quality, but unmeasured individual characteristics (prior social connections, motivation) remain uncontrolled.
The regression model explains 45-58% of variance. Substantial variation in retention remains unexplained by the four factors modeled. Prior social connections, economic motivation, and other unmeasured factors all likely contribute.
Post length is a crude proxy for effort. A 10,000-character post could be a thoughtful essay or a copy-pasted template. The controlled analysis uses raw character count, which doesn't distinguish between these. Similarly, payout reflects both post quality and curation luck — some high-quality posts earn nothing because no curator found them.
Median time-to-first-reply has high variance between months. Individual months can show spikes (e.g., 717 minutes in February 2017) that reflect data artifacts or unusual events rather than underlying trends. The era-level summaries are more reliable than individual month values.
Correlation between community size and engagement doesn't establish a clean critical-mass threshold. The relationship is continuous and confounded by era-specific factors. The "critical mass" framing is a useful heuristic, not a precise empirical finding.
This analysis does not address engagement quality — whether comments are substantive, in the newcomer's language, or responsive to their content. The community density post found that in-language, personal engagement (as in Aliento) is more effective than generic welcome messages. A full quality decomposition would strengthen these findings but requires natural language processing beyond the scope of this study.
Data: HiveSQL, queried May 2026. Time-to-first-reply: 117 monthly cohorts (April 2016 – December 2025). Retention model: 103 monthly cohorts with complete data. Individual-level analysis: 59,400 newcomers from 2021-2023 cohorts, with post length, first-post payout, and per-user human/bot reply counts. Bot classification: ~40 accounts identified as automated services. All queries bounded by date and designed for minimal HiveSQL load. For the full retention series, see 's collection post.