A few days ago I posted about using Intel's Optane drives to run STEEM workloads cost-effectively. Details of the initial tests are here: Using Intel Optane for STEEM blockchain seed node
Folks at Intel and Packet have collaborated to provide access to Optane-based hardware for open source projects, and they have considered the STEEM blockchain community's request. It looks like STEEM is on the way to becoming the first blockchain project to join the program!
To be honest, I was surprised by the above response, which said that the folks at Intel had already seen my post mentioning Optane!
My request is here: requesting access to test STEEM blockchain
Details of the program
Intel and Packet have partnered to give FOSS projects access to the latest Intel technologies so that communities can run performance benchmarks on this hardware. Getting time on current-generation hardware is often difficult for open source projects, so initiatives like AccelerateWithOptane help communities decide on the right architecture for production workloads.
We have access to very capable hardware for a limited period, within which we have to complete the tests and, if possible, publish the results.
Server
Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz, 192 GB of RAM, Plenty of NVMe
--
NAME     TYPE SIZE   MODEL
nvme2n1  disk 3.7T   INTEL SSDPEDKX040T7
nvme1n1  disk 698.7G INTEL SSDPE21K750GA
sda      disk 223.6G INTEL SSDSC2KB24
sda1     part 512M
sda2     part 1.9G
sda3     part 221.2G
nvme0n1  disk 698.7G INTEL SSDPE21K750GA
nvme3n1  disk 698.7G INTEL SSDPE21K750GA
--
Packet's Management interface
The interface / dashboard looks more like a project management tool, and it's refreshing. The best thing I've liked about Packet so far, though, is their support for BGP right there in the dashboard. (Spoiler alert: my first job was at GoGrid/DataPipe/Rackspace, where I wrote pretty much everything for a VM management tool, starting from the SNMP polling.)
The BGP management is of utmost importance to me, as BGP spoofing / poisoning / hijacking is the latest trend in town, with pretty much everyone from Google to your next-door ISP a victim.
How will the testing be done?
My intention is to take inputs from the community and make this a collaborative effort where everyone can participate. Testing replay times or RAM usage is straightforward, so the "seed" and "witness" scenarios are a good starting point. A full RPC node changes the picture: in addition to the memory-heavy and single-core-intensive CPU workload, it handles incoming connections, which exercises the kernel's network stack as well. To make matters a little more interesting, RocksDB now stores parts of the blockchain. A full RPC node does not just read the blockchain sequentially, which would be the easy case; it also performs seeks and parallel accesses against the RocksDB components, which makes the scenario challenging.
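To illustrate why the access pattern matters, here is a minimal sketch (not a proper benchmark; the file name and sizes are made up for illustration, and the page cache will distort results on a real run, where `fio` with direct I/O is the better tool). It times sequential 4 KiB reads versus the same reads in random order; the gap between the two on NAND SSDs is exactly where Optane's low latency should show up:

```python
import os
import random
import time

# Scratch file standing in for the block log / RocksDB SST files
# that would live on the device under test.
PATH = "scratch.bin"
BLOCK = 4096           # 4 KiB reads, typical of RocksDB point lookups
BLOCKS = 4096          # 16 MiB total, tiny compared to a real workload

with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

def timed_reads(offsets):
    """Read one BLOCK at each offset and return elapsed seconds."""
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = random.sample(sequential, len(sequential))

t_seq = timed_reads(sequential)
t_rand = timed_reads(shuffled)
print(f"sequential: {t_seq:.4f}s  random: {t_rand:.4f}s")
os.remove(PATH)
```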
Ideally, it would be better to have a few benchmark tools beyond the normal UNIX tools for the full RPC scenario.
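One such tool is fio. A sketch of a job file for the random-read side of the workload might look like the following (the device path, queue depth and runtime are placeholders to be adjusted for the actual drives under test; `randread` only reads, so it is safe against a raw device):

```ini
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=32
numjobs=4
runtime=120
time_based
group_reporting

[optane-randread]
filename=/dev/nvme1n1
rw=randread
```

Running the same job against the Optane and NAND NVMe devices on this server would give directly comparable latency and IOPS numbers.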
I request everyone to share their thoughts to make this a truly community-driven initiative.
The high-level plan shared with Intel / Packet is as follows:
There are three scenarios for using Optane as storage in the blockchain domain (which I think will apply to many other chains too), of which two can be tested in the one-week (5 x 24 hours) time frame. The third scenario needs a little more effort, as it includes benchmarking RocksDB; that is, there are two areas to cover, the blockchain itself and the RocksDB-based snapshot. After seeing the LMDB test results, I feel the third scenario will draw much interest, not just from the blockchain space but also from the in-memory database industry.
So in one week, we can test two of the following three scenarios:
- STEEM blockchain seed node: disk-storage- and RAM-intensive workload
- STEEM witness node: single-threaded (1 core), RAM- and disk-intensive workload with the highest possible network QoS
- STEEM full RPC node: this is essentially an application server with the STEEM blockchain stored in RocksDB. The workload is CPU-, RAM-, disk- and network-bandwidth-intensive and needs 512 GB of RAM as of now. The RocksDB on-disk size is around 240 GB as I write this. This configuration will take a little more effort to test.
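For the seed and witness scenarios, the basic measurements are replay wall time and peak memory. A minimal sketch of how I might capture both (the `steemd` command line and data directory here are assumptions; substitute the real binary and flags, and note that `ru_maxrss` is reported in KiB on Linux):

```python
import resource
import shutil
import subprocess
import sys
import time

# Hypothetical replay command; substitute the real steemd binary,
# data directory and flags for the node under test.
CMD = ["steemd", "--replay-blockchain", "--data-dir=/mnt/optane/steem"]

if shutil.which(CMD[0]) is None:
    # Fall back to a no-op child so the sketch runs anywhere.
    CMD = [sys.executable, "-c", "pass"]

start = time.perf_counter()
subprocess.run(CMD, check=False)
elapsed = time.perf_counter() - start

# Peak resident set size across child processes, in KiB on Linux.
peak_rss_kib = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print(f"replay took {elapsed:.1f}s, peak child RSS {peak_rss_kib} KiB")
```

Comparing these two numbers across the Optane and NAND drives (and later against the extended-memory setup) is the core of scenarios 1 and 2.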
"I also noticed you're potentially interested in using the Intel Optane DC SSD with Intel Memory Drive Technology. I can set that up for you, but would you first want to test as storage, then as extended memory?"
Yes, this is what I have been attempting for a while now. As you can see, workload scenario 3 above needs considerable RAM, and I have been exploring ways to use SSD/Optane as a cheaper alternative to RAM. So this is the real objective, and once scenario 1 and scenario 2 are covered, we can plan to test this.
"You're able to compare performance of the Optane SSD to a NAND SSD on the same server, so if possible please do test and note the differences you find."
Yes, it's possible. I had already done this (but not documented it).