The Hive Matrix Explorer started as a simple terminal-style experiment: plain JS, a direct RPC connection, and a raw stream of blockchain activity. (I don't have the original anymore due to the pixbee shutdown.)
We wanted it lightweight, with very dev-core energy. And honestly, the terminal version is still my favorite.
You can even run it locally in the browser and watch the chain move in real time. It's also fun: a few non-dev friends saw it and were genuinely into it.
That's when we thought: let's make this a bit more inclusive and easier for normies to open and explore.
But simple changes sometimes create real problems.
The RPC Problem
How were we going to release it without overwhelming Hive RPC nodes? If it went viral and 1,000 users opened it, that could mean 10,000 RPC streaming connections. We didn't want that.
We just wanted to release it publicly without accidentally stressing Hive RPC infrastructure.
I hope we did the right thing; Hive witnesses, let us know if this is the right approach.
The Other Problem: Memory
In our first tests, the browser tab went above 1.5 GB of RAM.
We were just pushing operations into state forever. No limits. No trimming. Just the pure stream. It worked... until it didn't.
So we had to implement cropping and smart buffering: keep only what is needed for display and trim old operations, instead of holding the full chain in memory.
After optimizing it properly, the app stays under 100 MB of RAM even after running for more than two hours.
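The trimming idea can be sketched as a simple capped buffer (a sketch of the technique, not the app's actual code; `MAX_OPS` is an assumed display-window size):

```typescript
// Keep only the most recent MAX_OPS operations so memory stays bounded,
// no matter how long the stream runs.
const MAX_OPS = 500; // assumed display window size

function pushTrimmed<T>(ops: T[], incoming: T[], max: number = MAX_OPS): T[] {
  const next = [...ops, ...incoming];
  // Drop the oldest entries once we exceed the cap.
  return next.length > max ? next.slice(next.length - max) : next;
}
```

In a React state update this slots in as `setOps(prev => pushTrimmed(prev, newOps))`, replacing an unbounded `[...prev, ...newOps]`.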
Solutions?
Approach 1: The "Browser-Only" Vanilla JS Implementation
This is where it all started, and where we identified the too-many-connections issue.
Still, if you want the absolute simplest way to stream Hive data, this is it: no compiling, no building, no Node backend.
How it works
By importing the dhive library directly via a CDN, you can tap into the Hive blockchain straight from the user's browser.
```html
<!-- index.html -->
<script src="https://unpkg.com/@hiveio/dhive@latest/dist/dhive.js"></script>
<script src="script.js"></script>
```

```javascript
// script.js
const client = new dhive.Client([
  "https://api.hive.blog",
  "https://api.deathwing.me"
]);

const stream = client.blockchain.getOperationsStream();

stream.on('data', (op) => {
  // Each operation arrives as a [type, payload] tuple.
  const type = op.op[0];
  const data = op.op[1];
  console.log(`New operation of type: ${type}`, data);
});
```
The upsides:

- Zero backend: you can host it on GitHub Pages, and there is no server to maintain.
- Direct connection: the user's browser connects straight to the Hive RPC nodes.
When a user first opens the page, the screen is empty: they have to wait up to a few seconds for the next block to be produced before seeing any data.
Processing heavy streams of data on mobile devices can drain batteries and cause UI stuttering if not carefully virtualized.
Approach 2: The Next.js App Router
To solve the empty initial state and build a more robust application, we pivoted to the Next.js framework, using a hybrid of SSR + SSE.
However, moving real time streams to a serverless SSR environment like Next.js requires a paradigm shift. If you just fetch data on the server, you only get a snapshot. If you just fetch data on the client, you're back to the empty initial state problem.
So we implemented a middle ground:

- The Server Buffer: the Next.js backend establishes a single @hiveio/dhive stream and maintains a buffer of the most recent operations in memory.
- Hybrid SSR: when a user navigates to the page, the framework executes a Server Component that grabs the current cache and injects it into the initial HTML, resulting in a fast first paint without layout shifts.
- Server-Sent Events (SSE): instead of the client polling the API every second, the client establishes an EventSource connection; as the server receives new Hive block data, it pushes the data down the open HTTP connection.
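For reference, the SSE wire format involved here is tiny: each message is a `data:` line carrying a JSON payload, terminated by a blank line, and `EventSource` turns each frame back into a message event on the client. A minimal sketch (the helper names are illustrative, not from the app):

```typescript
// Build one SSE frame: "data: <json>\n\n".
function formatSSE(ops: object[]): string {
  return `data: ${JSON.stringify(ops)}\n\n`;
}

// Mirror of what `JSON.parse(event.data)` does on the client side.
function parseSSE(frame: string): object[] {
  return JSON.parse(frame.replace(/^data: /, '').trimEnd());
}
```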
The implementation details
The Backend (app/api/hive/route.ts):
```typescript
import { startHiveStream, getHiveBuffer, clients } from '../../lib/hiveStream';

export const dynamic = 'force-dynamic';

export async function GET(req: Request) {
  // Make sure the single upstream dhive stream is running.
  startHiveStream();

  let sse: ReadableStreamDefaultController | undefined;

  const stream = new ReadableStream({
    start(controller) {
      sse = controller;
      clients.add(controller);

      // Instantly send the historical buffer so new clients never start empty.
      const buffer = getHiveBuffer();
      controller.enqueue(new TextEncoder().encode(`data: ${JSON.stringify(buffer)}\n\n`));

      // Clean up when the client disconnects.
      req.signal.addEventListener('abort', () => clients.delete(controller));
    },
    // cancel() receives an abort reason, not the controller, so we capture it above.
    cancel() {
      if (sse) clients.delete(sse);
    }
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
```
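The route imports a shared `lib/hiveStream` module that the post doesn't show. Here is a hypothetical sketch of what such a module could look like: the buffer cap, the optional `source` parameter, and all names below are illustrative, and the upstream dhive stream is stood in by a plain Node `EventEmitter` (in the real app it would be `client.blockchain.getOperationsStream()`, which is also an event emitter):

```typescript
// lib/hiveStream.ts -- hypothetical sketch; the real module is not shown in the post.
import { EventEmitter } from 'node:events';

const MAX_BUFFER = 200; // assumed cap on buffered operations

// Each SSE client is a ReadableStream controller registered by the route.
type SSEClient = { enqueue(chunk: Uint8Array): void };

const buffer: unknown[] = [];
export const clients = new Set<SSEClient>();
let started = false;

export function getHiveBuffer(): unknown[] {
  return buffer;
}

// Start exactly ONE upstream stream, no matter how many requests arrive.
export function startHiveStream(source: EventEmitter = new EventEmitter()): void {
  if (started) return;
  started = true;

  source.on('data', (op: unknown) => {
    buffer.push(op);
    if (buffer.length > MAX_BUFFER) buffer.shift(); // trim the oldest operation

    // Fan the new operation out to every connected SSE client.
    const payload = new TextEncoder().encode(`data: ${JSON.stringify([op])}\n\n`);
    for (const client of clients) {
      try {
        client.enqueue(payload);
      } catch {
        clients.delete(client); // the client's stream already closed
      }
    }
  });
}
```

This is the piece that turns N browsers into one RPC connection: the dhive stream is opened once, and every SSE client just taps the shared buffer and fan-out loop.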
The Frontend (Client Component):
```typescript
'use client';

import { useEffect, useState } from 'react';

export function useHiveLive(initialOps: HiveOp[] = []) {
  const [ops, setOps] = useState<HiveOp[]>(initialOps); // Initialize with the SSR data!

  useEffect(() => {
    // Subscribe to real-time updates pushed by the SSE endpoint.
    const source = new EventSource('/api/hive');
    source.onmessage = (event) => {
      const newOps = JSON.parse(event.data);
      setOps(prev => [...prev, ...newOps]);
    };
    return () => source.close();
  }, []);

  return { ops };
}
```
The result is what we hoped for: users get data instantly on load, followed by seamless, lightweight, socket-like streaming.
If 1,000 users open the app, you only have ONE connection to the Hive RPC nodes. The server distributes the data to the clients, saving the decentralized network from being spammed with identical requests.
If you're hacking together a quick viewer or a static dashboard, dropping dhive into a script tag is incredibly satisfying.
But if you're building a production-grade Web3 application where performance and UX matter, wrapping @hiveio/dhive in a Next.js Server-Sent Events architecture gives you far more control.
Check out the repositories and start streaming the Hive blockchain!
https://github.com/rferrari/hive-matrix-viewer
Skatehive Team:
http://hive-matrix-viewer.vercel.app/team