Disclaimer: This blog post is for informative purposes only. I am an active Hive developer and my intentions are not malicious towards any Hive community nor specific project.
Inspiration
Some time ago, a user started a thread on Hive Mattermost asking about ways to handle app data.
I wanted to answer the question briefly: "Just use `custom_operation` - it can carry binary data pretty well and is not visible to our frontends." But then they asked an even better question, which actually made me wonder - is it really possible to do so?
(TL;DR yes)
Backbone of Hive
Starting my work at Hive, the first thing I did was block log analysis - that's right - I manually decoded the entire block log schema (the testnet one, of course 😉). Here are the results:
So the block you are looking at is 248 B in size. But have you ever wondered what the actual constraints on block / transaction sizes are?
Block constraints
Most of you already know the Hive block creation interval, which is 3 seconds, but what about the block size?
The minimum block size hardcoded into the Hive config is 64 kiB (kiB, not kB!) minus the 121 B required for the block metadata & signature. This used to be true, but the setting is now stale (unused), and the value actually refers to the maximum block size, configured by Hive witnesses.
The theoretical maximum Hive block size is 2 MiB, but we have never actually reached that number. So the current median maximum block size set by Hive witnesses is 64 kiB (65536 B) minus the magic number 256, plus the required block metadata (the mentioned 121 B) = 65401 B.
- Why not the magic number 128 B (also a power of 2)? It would be closer to the actual 121 B.
- A block can also hold a dynamic list of extensions, the largest being 8 B (`hardfork_version_vote`). This doesn't mean we can't exceed the 256 B block metadata size threshold, but it is especially hard, as extensions are only used by witness nodes to signal blockchain hardforks, which happens rarely.
Current REAL block size constraints: min. 121 B - max. 65401 B
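The arithmetic above can be spelled out in a few lines (the constant names here are my own, for illustration - not Hive's actual identifiers):

```typescript
// Illustrative constants - names are made up for this sketch.
const KIB = 1024;
const THEORETICAL_MAX_BLOCK_SIZE = 2 * KIB * KIB; // 2 MiB hard ceiling
const WITNESS_MAX_BLOCK_SIZE = 64 * KIB;          // 65536 B, current median witness setting
const MAGIC_RESERVE = 256;                        // the "magic" 256 B reserve
const BLOCK_METADATA_SIZE = 121;                  // block metadata + signature

// The maximum size a block can actually reach today:
const maxRealBlockSize = WITNESS_MAX_BLOCK_SIZE - MAGIC_RESERVE + BLOCK_METADATA_SIZE;
console.log(maxRealBlockSize); // 65401
```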
Transaction constraints
There is no predefined minimum transaction size (even though there is a constant in the Hive config referring to 1 kiB). The transaction itself has to carry some data, including the ref block and a signature, which results in 79 B of serialized data.
But wait! We are required to have at least one operation inside the transaction! This results in the theoretically smallest possible transaction containing one signature and one `decline_voting_rights_operation` (min. 6 B serialized, including the op type, account name, and boolean flag), for a total size of 85 B.
A mysterious Hive core dev brought up an idea for even smaller transactions - specifically, signatureless ones! How does that work? The Hive blockchain has certain accounts with "null authority", such as `temp`. Such an account can execute operations without requiring a signature, which yields the absolute minimum transaction size with `decline_voting_rights_operation` of exactly 20 B.
So we have the minimum transaction size. What is the maximum? The maximum Hive transaction size used to be 64 kiB (based on the `HIVE_MAX_TRANSACTION_SIZE` config available in Hive's source code), but thanks to a great conversation between two of our core devs, it is now dynamic 🤭. This means that for the maximum transaction size, we have to subtract 256 B from the maximum block size defined by the witnesses, giving 65280 B.
Current REAL transaction size constraints: min. 20 B - max. 65280 B
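A quick sanity check of those numbers. Note that the 65 B signature size is my own addition (the size of a compact ECDSA signature), not a figure from the discussion above:

```typescript
const TX_OVERHEAD = 79;    // ref block data + expiration + one signature, serialized
const SIGNATURE_SIZE = 65; // compact ECDSA signature (my assumption)
const MIN_OP_SIZE = 6;     // decline_voting_rights_operation: op type + account + flag

const minSignedTx = TX_OVERHEAD + MIN_OP_SIZE;           // 85 B
const minSignaturelessTx = minSignedTx - SIGNATURE_SIZE; // 20 B via a "null authority" account
const maxTxSize = 64 * 1024 - 256;                       // 65280 B
```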
Web technologies limitations
Modern web pages usually serve a lot of content (Denser, for example, has to serve ~7 MB, PeakD ~10 MB, Ecency ~31 MB).
Benchmarked on each frontend's /trending webpage.
Binary files problem
We already have a "decentralized" image hosting service, so we can externalize some of the content that takes up the vast majority of the downloaded content size - images and GIFs.
So we are left with a few file types: HTML, CSS, JS, WASM, and fonts (assuming we are only using the HAF REST API and JSON-RPC Nodes legacy API for blockchain communication).
Fonts are not that critical (assuming we are using native Unicode emojis and SVG for icons), so we can use fonts from the Google Fonts CDN or the default ones.
WASM can be served via library packages using services such as unpkg.com - but it is not truly decentralized at that point.
Web2 technologies on Hive
We are left with HTML, CSS and JS by this point.
We already know that the maximum amount of data we can currently store on Hive at once is 65280 B, but that is still not the whole story.
So there are 3 main ways for persistent storage on Hive:
- Hive posts / comments (`comment_operation` body - any UTF-8 compatible format + JSON metadata)
- Custom JSONs (`custom_json_operation` - must be valid JSON)
- Custom operations (`custom_operation` - can be anything, including binary). Note that `custom_binary_operation` is deprecated (never used on the blockchain and temporarily blocked by a softfork).
Of course, there are many other ways (such as a transfer's memo or an account's json metadata and posting json metadata), but those are highly inconvenient and strictly limited in size and format - e.g. the mentioned memo is 2 kiB max, and json metadata is limited by the single transaction size - ~63 kiB.
Hive posts case
There is no real limitation on the post / comment body and json metadata size; we are only limited by the transaction size. This means the theoretical maximum is 65280 B minus the transaction metadata (79 B) minus the `comment_operation` metadata (author, permlink, etc. - ~22 B when optimized for size) = 65179 B.
For the maximum operation frequency, we can look at the Hive evaluator for social-related operations - one post per 5 minutes.
Summary: max data size: 65179 B. Max frequency: 5 minutes
And let's be real - writing multiple Hive posts would drain your RC mana in the blink of an eye 👀.
Hive comments case
A Hive comment behaves similarly to a Hive post, but we can write comments more often - once every 3 seconds - and we have to provide a parent author (-5 B) = 65174 B.
Summary: max data size: 65174 B. Max frequency: 3 seconds
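The payload budget for both cases, derived from the transaction limit above (the overhead values are the approximations mentioned in the text):

```typescript
const MAX_TX_SIZE = 65280;      // maximum transaction size
const TX_METADATA = 79;         // transaction metadata + signature
const COMMENT_OP_METADATA = 22; // author, permlink etc., size-optimized (approx.)
const PARENT_AUTHOR = 5;        // extra field required for comments

const maxPostBody = MAX_TX_SIZE - TX_METADATA - COMMENT_OP_METADATA; // 65179 B
const maxCommentBody = maxPostBody - PARENT_AUTHOR;                  // 65174 B
```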
Custom JSONs case
Based on the Hive evaluator code, the maximum data size of `custom_json_operation` as of HF26 is 8 kiB (the `HIVE_CUSTOM_OP_DATA_MAX_LENGTH` value).
Theoretically, we are restricted to the JSON schema, but this can be leveraged to store JS files inside the JSON, which will later be read and eval-ed by other JS files.
There is also a frequency limitation: a single Hive account can publish up to 5 custom operations per block.
Summary: max data size: 8192 B. Max frequency: 5 ops / block (~3s)
This means you would have to create JS/CSS chunks of 8 kiB each. But hey - at least you would not drain all of your RC mana 😅.
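Here is a sketch of what packing a JS chunk into a `custom_json_operation` could look like. The operation shape follows Hive's `custom_json_operation` fields, but the `web-chunk` id and the payload layout are made up for illustration:

```typescript
const HIVE_CUSTOM_OP_DATA_MAX_LENGTH = 8192; // 8 kiB limit from the Hive config

function makeJsChunkOp(account: string, chunkIndex: number, jsSource: string) {
  const op = {
    required_auths: [] as string[],
    required_posting_auths: [account],
    id: "web-chunk", // hypothetical app identifier
    json: JSON.stringify({ i: chunkIndex, src: jsSource }),
  };
  // Check the byte length, not the UTF-16 string length.
  if (new TextEncoder().encode(op.json).length > HIVE_CUSTOM_OP_DATA_MAX_LENGTH) {
    throw new Error("chunk too large for a single custom_json_operation");
  }
  return op;
}

// A reader could later reassemble and eval the stored source, e.g.:
// eval(JSON.parse(op.json).src);
```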
Custom operations case
The custom operation behaves similarly to the custom JSON, but allows saving data of any kind (binary).
Summary: max data size: 8192 B. Max frequency: 5 ops / block (~3s)
Note: the 5 ops / block limit refers to the number of `custom_operation` and `custom_json_operation` per block combined!
Chunk concatenation
So let's assume we have everything set up and already published to Hive. Now what if we want to add new features, or simply update just one file? This would require broadcasting all the files again to the network, wouldn't it?
Well, we could either:
- Write in vanilla HTML/CSS/JS and maintain all the files manually, splitting them into chunks due to maximum transaction size limitations, and update only those which we edited.
- Create a customized bundler with chunk size limitation and automatic on-Hive publisher script.
For full automation, data redundancy reduction, and ultimately for people on Hive to actually use this method, it would at some point REQUIRE writing the mentioned bundler and script.
| Operation type | Max chunk size [B] | Broadcast interval for single account [s] |
|---|---|---|
| Post | 65179 | 300 |
| Comment | 65174 | 3 |
| Custom JSON | 8192 | 0.6 |
| Custom binary | 8192 | 0.6 |
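The core of such a bundler's publisher script is just splitting the bundle into operation-sized chunks and rejoining them on read. A minimal sketch, assuming the 8 kiB custom-op limit:

```typescript
const MAX_CHUNK_SIZE = 8192; // custom_json / custom_operation payload limit

// Split a bundle's bytes into chunks that each fit in a single operation.
function splitIntoChunks(data: Uint8Array, chunkSize: number = MAX_CHUNK_SIZE): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let i = 0; i < data.length; i += chunkSize) {
    chunks.push(data.slice(i, i + chunkSize));
  }
  return chunks;
}

// Reassemble the original bundle from its chunks, in order.
function concatChunks(chunks: Uint8Array[]): Uint8Array {
  const out = new Uint8Array(chunks.reduce((n, c) => n + c.length, 0));
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.length;
  }
  return out;
}
```

With a content hash per chunk, only the chunks of edited files would need to be re-broadcast.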
THE Actual Implementation
Okay, everything's good: we have chosen our hosting provider (the Hive blockchain), we have everything set up - HTML, JS, CSS and all the external CDNs are in place, created by our custom bundler. But wait! How are we even going to access our freshly cooked (vibe coded) web page?
API for static HTML webpages
Currently, there is no API for serving static HTML/JS/CSS content with proper MIME types on Hive. This could be done on top of HAF as a plugin. Ironically, that would be the easiest part of the puzzle, so let's skip to the next problem, ASSUMING we already have it implemented under https://api.openhive.network/blog-posts-api 😋
DNS limitations
First of all, we have to make our website publicly available to the broader audience. Usually it is pretty simple - we have hosting, we have a domain registrar with a DNS records manager set up.
What remains is connecting a domain to an IP (A/AAAA record) or another domain (CNAME record). Pretty easy stuff - just a one-liner.
But wait! What are we actually going to point our domain to? A Hive node? But that will just result in the node's raw API response - and we want our web page!
So what about pointing to a specific URL?
The Solution: Cloudflare Workers
You might be tempted to use simple URL redirects, but redirects are terrible for SEO. A 301 or 302 redirect tells Google that your content actually lives at api.openhive.network, meaning the API gets all the "SEO juice" (backlinks, authority) while your domain acts as an empty transit tunnel.
Instead, we can solve this using Cloudflare Workers. By setting up a simple Worker script, it acts as a serverless reverse proxy. When someone visits myapp.com, the Worker silently fetches the HTML from the OpenHive API under the hood and serves it directly to the user. To the browser and to Google's crawlers, the content natively belongs to myapp.com. The original API URL remains completely hidden, you keep 100% of your SEO rankings, and you don't even need to maintain your own server!
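A minimal sketch of such a Worker, assuming the hypothetical blog-posts API from earlier actually exists at that URL:

```typescript
// Hypothetical upstream - the blog-posts API assumed earlier in this post.
const UPSTREAM = "https://api.openhive.network/blog-posts-api";

// Pure helper: map the visitor-facing URL onto the upstream API URL.
export function buildUpstreamUrl(requestUrl: string): string {
  const url = new URL(requestUrl);
  return UPSTREAM + url.pathname + url.search;
}

export default {
  async fetch(request: Request): Promise<Response> {
    // Fetch the content server-side; the browser only ever sees myapp.com.
    const upstream = await fetch(buildUpstreamUrl(request.url));
    return new Response(upstream.body, upstream);
  },
};
```

The helper is kept pure so the URL-mapping logic can be unit-tested outside the Workers runtime.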
Conclusion
Sooo.. yes, technically speaking it is POSSIBLE to create such a service, but maintaining your web page directly on Hive using any kind of L1 operations would essentially be a pain in your a🍑🍑.
Thanks for reading this article. If you have any questions, feel free to comment! I'd love to see someone (suffer) actually implement(ing) this idea 🤤! (No AI images used)