I am working on the Merg-E domain-specific language for the InnuenDo stack, and I recently got the domain merg-e.xyz. Right now, if you go to www.merg-e.xyz, you get immediately redirected to a blog post of mine on hive.blog, which is great for now, but it doesn't really feel very professional. If you go to lang.merg-e.xyz you end up at a different blog post on hive.blog. Then I remembered an old project of mine, a really old project of mine, and I started thinking.
Ancient history: Capibara Free Domain project
More than two decades ago I ran a tool called the Capibara URL Cloaking Kit, or C-Duck. You can still find the ancient, mostly Perl code that I put on github a decade ago here.
So what was C-Duck? It was an early distributed web cloaking service and a successor to the Capibara Free Domain project, a simple URL cloaking site for early internet users who had a website on services like GeoCities, which gave people a free website with a tiny bit of storage. The Capibara Free Domain project basically let the user point a CNAME DNS record at the www.capibara.com website, and a simple CGI script, because that was how things worked in the mid '90s, would return a simple HTML page consisting of either a single frame that cloaked the real web page, or, in order to fund the service's traffic costs, an occasional second frame with an advertising banner.
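To make the mechanism concrete, here is a rough sketch of the kind of CGI script that served those frameset pages. The original was Perl, and the hostname-to-target mapping lived in the service's registration data; the `TARGETS` dict and the example URLs below are purely illustrative assumptions.

```python
#!/usr/bin/env python3
# Hypothetical sketch of a mid-'90s-style URL cloaking CGI script
# (the real Capibara service was written in Perl, not Python).
import os

# In the real service this mapping came from user registrations;
# here it is a hard-coded example.
TARGETS = {"www.example.org": "http://www.geocities.com/someuser/"}

def frameset_page(host: str) -> str:
    """Return a CGI response with a frameset that keeps `host` in the
    browser's address bar while showing the target page in a full frame."""
    target = TARGETS.get(host, "about:blank")
    return (
        "Content-Type: text/html\r\n\r\n"
        f"<html><head><title>{host}</title></head>\n"
        '<frameset rows="100%">\n'
        f'  <frame src="{target}" frameborder="0">\n'
        "</frameset></html>\n"
    )

if __name__ == "__main__":
    # CGI passes the requested hostname in the HTTP_HOST environment variable.
    print(frameset_page(os.environ.get("HTTP_HOST", "www.example.org")), end="")
```

An ad-funded variant would simply emit a two-row frameset, with the banner in the extra frame.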
Then in 2001, the hosting provider where the service was running, a US-based company named Ijnt, went belly up in the dot-com bust, and I failed to find affordable alternative hosting.
C-Duck
By 2002, ADSL was becoming a fast (1 Mb/s; remember, in the late '90s we were using modems with speeds up to 56 kb/s) but not always reliable common form of home internet access, and the C-Duck project tried to fit that new reality and revive the Capibara Free Domain project.
So what was C-Duck, the Capibara URL Cloaking Kit? It was a collection of mainly Perl scripts that tried to safely and reliably re-imagine the Capibara Free Domain project as a distributed system running on low-QoS home-grade DSL connections. It consisted of a config fetcher, a trivial DNS server, a trivial web server, a trivial intrusion detection system, and a setup script that made C-Duck run as multiple daemons, each with its own user id and its own set of subsystem firewall rules. It was quite a hardened little system by 2002 standards, especially by 2002 home-server standards. The idea was that you shouldn't need to be a full-time sysadmin to safely run a C-Duck node.
By 2002, the Capibara Free Domain project had been offline for more than a year, and after e-mailing all of the former users, it turned out more than two-thirds had already found other options for their domain or had decided to just let their domains expire. The bulk of the remaining domains were 'op.nu' domains that I had been giving away for free. One big user of the former Capibara Free Domain project made up a large chunk of the remaining non-op.nu domains, and he was interested in running a C-Duck node; a few others joined as well.
So how did the distributed system work? Well, for starters, every node had a config file that looked something like this:
server ns1 123.123.123.123
include http://www.xs4all.nl/~rmeijer/cduck_example/exampleinclude.conf
map * http://www.xs4all.nl/~rmeijer/cduck_example/wildcard.map_ 40000
zone example1.com http://www.xs4all.nl/~rmeijer/cduck_example/example1.zone 100
map example1.com http://www.xs4all.nl/~rmeijer/cduck_example/example1.map_ 1000
This file told the node what all the servers in the distributed setup were and where it could find all the URL mapping info, plus some other info needed to run the node.
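The config format above can be parsed with a few lines of code. The following sketch reconstructs the line grammar purely from the example (so the exact field meanings, especially the trailing numbers being refresh intervals in seconds, are an assumption on my part):

```python
# Sketch of a parser for the C-Duck style node config shown above.
# Assumed line formats, reconstructed from the example:
#   server <name> <ip>
#   include <url>
#   zone <domain> <url> <refresh-seconds>
#   map <domain-or-*> <url> <refresh-seconds>

def parse_cduck_config(text: str) -> dict:
    config = {"servers": [], "includes": [], "zones": [], "maps": []}
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue  # skip blank lines and comments
        kind = parts[0]
        if kind == "server":
            config["servers"].append({"name": parts[1], "ip": parts[2]})
        elif kind == "include":
            config["includes"].append(parts[1])
        elif kind in ("zone", "map"):
            config[kind + "s"].append(
                {"domain": parts[1], "url": parts[2], "refresh": int(parts[3])}
            )
    return config

example = """server ns1 123.123.123.123
map * http://www.xs4all.nl/~rmeijer/cduck_example/wildcard.map_ 40000
zone example1.com http://www.xs4all.nl/~rmeijer/cduck_example/example1.zone 100
"""
```

The config fetcher would then periodically re-download the referenced map and zone files at their stated intervals.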
The node would fetch these files and start running its tiny little DNS server and tiny little web server. Meanwhile, a tiny little IDS monitored networking and logs to pull the plug on the server if needed.
That's Me DNS
The C-Duck DNS server was simple. Ask it for SOA or NS records and you would get proper info, including on the other DNS servers. Ask it about some other record, and if, for example, an MX record was in a downloaded zone file, it would answer appropriately. But ask it for an A record, and it would basically respond with a "That's me" answer: always return its own address, and always with a short TTL. That way, when an ADSL connection went down for a bit, or the system was rebooted for whatever reason, or was down for maintenance, or because there was no free wall socket when the operator's wife needed to vacuum, the DNS server being unreachable would effectively take the unavailable node out of the pool until it had started again or the line had come back up. Connection failure was quite common in those days, as ADSL used PPTP connections to the ADSL modem, and reliable reconnection scripts were not really available.
In effect, the That's Me DNS provided a low-tech combination of dynamic load balancing and failover.
The micro webserver
Then the micro webserver would do the rest. Just like with the Capibara Free Domain Project, it would output an appropriate single-frame web page, cloaking a redirect to the actual web page.
HIVE
It's 2026 now, 24 years since the first release of the now long-abandoned C-Duck project. So why am I talking about C-Duck? Well, there are some interesting similarities today. Like GeoCities pages in the past, HIVE blog posts are like personal web pages. Some of these blog posts, or collections of blog posts, may be like websites in their own right, but they have long URLs, so they are not something to put on a business card or on a slide in a PowerPoint. If I give a talk on Merg-E, I would like to put www.merg-e.xyz on my last slide instead of hive.blog/@pibara/very-long-blog-name.
We could make another C-Duck-like system, but it isn't 2002 anymore. The tech stack has changed very much. Frames are no longer good practice and are even considered malicious by many; dApps are running on multi-gigabit lines at least, or in data centers with connectivity that makes a 1 Gb/s home fiber connection feel like those 56 kb/s modems from 1999. And, very importantly, everything is SSL/TLS now, and the future is Web 3.0: blockchain, not silly config files that weren't even JSON back then, because hardly anyone even knew it had been invented.
I think a new reimagined C-Duck for HIVE that is dApp centric but not dApp specific could provide great value to both the community and to teams running dApps such as hive.blog, who could make a little extra HIVE or HBD from running a hypothetical Hive-Duck node next to their dApp in a way that makes it feel like part of the actual dApp.
Imagine a C-Duck-like infrastructure where the nodes aren't old Pentium IIIs running over a flaky 1 Mb/s ADSL connection in someone's basement, but where each node is part of a distinct HA dApp infrastructure, making the whole setup a doubly distributed powerhouse.
Registrations through user level custom_json, registrar DNS, and a registration bot
I imagine HIVE-Duck starting off with one bot and a custom_json type monitored by that bot. Imagine I want to run my new domain on the hypothetical Hive-Duck. The first thing I do is add a TXT record to my domain in the DNS config of my domain registrar. I imagine the following convention:
hiveduck.merg-e.xyz TXT "@pibara"
All I really say here is: Hey, HiveDuck nodes, I'm about to register my ownership of merg-e.xyz as the HIVE user @pibara. Please take note.
So I have the TXT record, but none of the Hive-Duck bots know about it yet. So I post a custom_json notifying all the bots to please start taking note of my merg-e.xyz domain, and to keep an eye open for mappings posted to my on-chain account JSON.
All the custom_json basically needs to say is: Hey, check out the TXT record for my domain; I'm going to be administrating it in my account-level JSON soon. I propose that this action triggers all Hive-Duck nodes to start serving just a SOA and TXT record for this domain, for free, so that the user can change the DNS servers with their domain registrar without any chicken-and-egg problems.
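To make the registration step concrete, here is a sketch of what building that custom_json operation could look like. Note that the operation id `hiveduck_register` and the payload field names are my own invention for illustration; nothing like this exists yet, and the standard `required_posting_auths` wrapper is how HIVE custom_json operations carry the signing account:

```python
import json

# Hypothetical shape of the Hive-Duck registration custom_json.
# The id "hiveduck_register" and the payload fields are assumptions,
# not an existing spec.
def make_registration(domain: str, account: str) -> dict:
    """Build a custom_json operation announcing domain ownership intent."""
    return {
        "required_auths": [],
        "required_posting_auths": [account],  # signed with posting authority
        "id": "hiveduck_register",
        "json": json.dumps({"domain": domain, "owner": f"@{account}"}),
    }
```

A monitoring bot would see this operation on-chain, look up the `hiveduck.<domain>` TXT record, and check that it names the same account before taking note of the domain.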
Finally, the user extends their account-level JSON with something that tells any node where they want stuff to map to within their HIVE account.
I imagine an embedded bit of JSON looking roughly like this:
{
"merg-e.xyz" : {
"A:" : "merg-e-a-different-type-of-least-authority-language-and-runtime-top-level-overview",
"A:www" : "merg-e-a-different-type-of-least-authority-language-and-runtime-top-level-overview",
"A:lang" : "version-03-of-the-merge-e-language-specification--files-merging-scoping-name-resolution-and-synchronisation",
"MX:": ["some-long-name.isp.nl"],
"NS:": ["ns.hive.blog", "ns.peakd.com"]
}
}
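A node consuming the JSON above needs to turn an incoming hostname into a blog post URL. The sketch below follows the `"A:<label>"` key convention from the example; the assumption that an empty label means the bare domain, and that targets resolve to `hive.blog/@<owner>/<permlink>` URLs, is mine:

```python
# Sketch of how a node could turn the account-level JSON above into a
# hive.blog target URL for an incoming hostname. The "A:<label>" key
# convention follows the example; the URL shape is an assumption.
def resolve_target(dns_json: dict, owner: str, host: str):
    """Return the hive.blog URL mapped to `host`, or None if unmapped."""
    for domain, records in dns_json.items():
        if host == domain:
            label = ""                       # bare domain -> "A:" key
        elif host.endswith("." + domain):
            label = host[: -len(domain) - 1]  # strip ".domain" suffix
        else:
            continue
        permlink = records.get(f"A:{label}")
        if permlink is not None:
            return f"https://hive.blog/@{owner}/{permlink}"
    return None
```

With the example JSON, `lang.merg-e.xyz` would resolve through the `"A:lang"` key to the language-specification post.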
One time payment
While the DNS info is published, the nodes are still under zero obligation to start serving it, not even if a node was explicitly listed. My proposal is that they do so after a one-time payment, using a memo value to signal to the bot that it should register the payment as being towards the hosting of the given domain:
HIVEDUCK:merg-e.xyz
If the dApp owner sets a price in HIVE and/or HBD, the bot can check incoming payments against it and add the user's DNS to the config of the Hive-Duck node.
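The bot-side payment check could be as simple as the sketch below. The `HIVEDUCK:<domain>` memo convention follows the proposal; the specific price and the HBD-only check are illustrative assumptions:

```python
# Sketch of the bot-side check for one-time Hive-Duck hosting payments.
# The "HIVEDUCK:<domain>" memo convention follows the proposal above;
# the price and currency handling are assumptions.
PRICE_HBD = 5.0  # hypothetical one-time fee set by the node operator

def handle_transfer(memo: str, amount: float, currency: str):
    """Return the domain to activate, or None if the transfer doesn't qualify."""
    if not memo.startswith("HIVEDUCK:"):
        return None  # unrelated transfer
    domain = memo.split(":", 1)[1].strip()
    if currency == "HBD" and amount >= PRICE_HBD and domain:
        return domain
    return None  # wrong currency, underpaid, or empty domain
```

On a qualifying transfer the bot would then add the payer's domain mappings to the node's serving config.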
No frames but a proxy
As stated, frames were 2002; it is 2026 now, and a proxy will be a much better option for the web part of the Hive-Duck node. The proxy will do just a little bit of rewriting of the request URIs.
If / is requested, both the username and the relative path from the config JSON are added. If the URI contains no @USER part, the owner's @USER part is inserted. Any other URI just goes to the server unchanged.
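Those three rewriting rules can be sketched as a single function. The `/@user/...` path shape mirrors how hive.blog URLs look; treating any path that starts with `/@` as already user-qualified is my assumption:

```python
# Sketch of the proxy's request-URI rewriting rules described above.
# The owner and root permlink come from the user's account-level JSON;
# treating "/@..." paths as already qualified is an assumption.
def rewrite(path: str, owner: str, root_permlink: str) -> str:
    if path == "/":
        # Bare root: insert both the username and the configured permlink.
        return f"/@{owner}/{root_permlink}"
    if not path.startswith("/@"):
        # No @USER part: insert the owner's.
        return f"/@{owner}{path}"
    # Anything else passes through to the upstream dApp unchanged.
    return path
```

So a request for `https://www.merg-e.xyz/` would be proxied upstream as the owner's configured root post, while deep links into other accounts' posts pass through untouched.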
SSL challenge: getting N certificates for N nodes?
Now comes the hard part, a part I don't have an answer for yet. Each node can control the DNS answers it gives, but not the DNS answers the other dApps' nodes give. But it is 2026 and we need X.509 certificates for all of the domains, and each node needs them. Right now I'm envisioning the user needing to manually set a master, and changing it for each dApp when they want that dApp's node to go through the (automated) trouble of obtaining an X.509 certificate.
{
"merg-e.xyz" : {
"A:" : "merg-e-a-different-type-of-least-authority-language-and-runtime-top-level-overview",
"A:www" : "merg-e-a-different-type-of-least-authority-language-and-runtime-top-level-overview",
"A:lang" : "version-03-of-the-merge-e-language-specification--files-merging-scoping-name-resolution-and-synchronisation",
"MX:": ["some-long-name.isp.nl"],
"NS:": ["ns.hive.blog", "ns.peakd.com"],
"HIVE-DUCK-ACME-MASTER": "ns.hive.blog"
}
}
This mode should be temporary and lets the other nodes temporarily proxy DNS to the HIVE-DUCK-ACME-MASTER. The ACME master should then go through the ACME process to obtain an SSL/TLS X.509 certificate for the domain.
It might be possible to come up with a P2P protocol between the nodes to make certificate acquisition more convenient for the user.
Just musings, for now
My current todo list for the whole InnuenDo stack is already massive. So while I would absolutely be able to build this, it is not likely that I will any time soon. Although, once I finish the Merg-E language, building it purely in Merg-E could be a great intermediate demonstrator before continuing on to the w3minorfs L2 that is far away on my roadmap, just to see how strong a language and runtime Merg-E really is. But right now Merg-E doesn't even have a complete semantic lexer yet, the CoinZdense port to the C++ libcoinzdense has only just begun, and so has the code move and refactoring of aiohivebot into aiow3. As such, any Hive-Duck implementation is probably years away if I'm the one who has to do it.
If anyone wants to implement it before then, I'll be available for consulting and, if you implement it in Python, C++ or Rust, for code reviews. If not, and the dApp crowd is interested, I'll keep it as a low-priority project on my roadmap. So if you are part of any of the dApp teams, let me know what you think. If anyone wants to turn this idea into a DHF proposal and it leads to funding, great, go for it; if you plan to implement it in Python, you can reach out to spend some of that funding on me. I can spare up to 4 hours a week on paid projects, and you can probably find cheaper devs than me to work on it, but by all means reach out about those 4 hours if you manage to find funding and want me on board. It is January, and I can only commit to more hours once a year, before the new year begins, so no DHF for me at this point. For me it will just be one of the 20, or now 21, projects on my backlog. As I said, musings for now. Free for anyone to do with what they like, or to tell me they'd like me to keep it on my list so I might pick it up in a few years, when my high-priority InnuenDo projects have sufficiently advanced to implement this project in Merg-E.