This guest post was written by Ross Dold of EOSphere. Learn more about EOSphere’s work in the EOS ecosystem at the end of this article.

Antelope Leap v5.0.0 was released almost a month ago and is now seeing adoption across the many Antelope-based networks as node operators start to upgrade their production environments.

Leap v5.0.0 was designed to be more performant, efficient and reliable than prior versions, which is excellent news for node operators, as even a marginal improvement can translate to massive gains across a fleet of hundreds of managed nodes.

With this in mind, the EOSphere team has documented our real-world comparison of the improvements in CPU, memory and disk IO between Leap v4.0.4 and v5.0.0 in the article below.

Leap v5.0.0 CPU, Memory and Disk IO Performance

The following article was built from statistics gathered on one of the EOSphere EOS Mainnet public peer nodes. This node was chosen as it is in production and highly utilised, with between 180–195 organic incoming public peers. The hardware configuration is as below:

  • Ubuntu 22.04
  • Virtualised in KVM 7.2.0
  • 4 CPU Cores
  • 32GB RAM
  • 128GB SWAP
  • Drive 1 : OS & State : 256GB Enterprise NVMe
  • Drive 2 : Blocks : 4TB Enterprise NVMe (ZFS)

CPU

Below is the chart of monthly CPU usage, showing utilisation on v4.0.4 and then the upgrade to v5.0.0 on 22/1/2024 (20h00).

KVM CPU Utilisation of EOSphere Public Peer Node

CPU utilisation immediately dropped from an average of 85% to a normalised 60%.
This is excellent news for running multiple nodes, whether physical, virtualised or in cloud environments. It could also mean that the traditionally configured max-clients peer limit of 200 could be extended to 250 or even 300 for a public node.
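As a purely illustrative sketch (the 250 figure is our assumption, not a tested recommendation), raising the limit is a one-line change in config.ini:

> nano config.ini
max-clients = 250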

Disk IO and Memory

If you have read any of our previous Antelope chain articles, you will be aware that EOSphere has been an advocate of running Leap nodes using the tmpfs strategy.

The tmpfs strategy involves running the nodeos chainbase database state folder in a tmpfs mount, allowing us to oversubscribe RAM with SWAP and achieve more efficient memory utilisation and disk IO.

tmpfs is a Linux file system that keeps all of its files in virtual memory. The contents of such a mount are temporary, meaning that if the folder is unmounted or the server is rebooted, all contents will be lost.

The challenge with tmpfs being temporary is that all data is lost on reboot, and nodeos will then require a restart via snapshot.
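As a minimal sketch of that strategy, the state folder is simply a tmpfs mount; the mount point and 48G size below are illustrative assumptions, not our production values:

> sudo mount -t tmpfs -o size=48G tmpfs /home/eos/data/state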

Leap v5.0.0 brings a new database map mode called mapped_private as an alternative to the default mapped mode. Instead of constantly writing to disk as mapped mode does, mapped_private mode better utilises memory and reduces disk IO. It does this by mapping the chainbase database into memory using a private mapping, which means that any chainbase data accessed during execution remains in memory and is not eligible to be written back to the shared_memory.bin disk file.

If that sounds familiar, it is: mapped_private is an excellent replacement for the tmpfs strategy. There is no need to mount a tmpfs partition, and as the in-memory chainbase data is written to disk on exit, there is no need to restart from a snapshot after a reboot.

mapped_private configuration

Configuring mapped_private simply involves adding the line below to config.ini:

> nano config.ini
database-map-mode = mapped_private

In order to start nodeos, mapped_private requires sufficient memory to cover the private mapping of the configured chain-state-db-size-mb; physical RAM can be supplemented with SWAP, allowing oversubscription.

At the time of writing, 32GB of physical RAM and 128GB of SWAP is sufficient to run an EOS Mainnet node.
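If SWAP needs to grow to cover the configured chain-state-db-size-mb, a swap file is one way to do it. A minimal sketch, assuming a 128G file at /swapfile:

> sudo fallocate -l 128G /swapfile
> sudo chmod 600 /swapfile
> sudo mkswap /swapfile
> sudo swapon /swapfile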

mapped_private operation and results

On the first nodeos start in mapped_private mode (assuming you are starting from a snapshot), the entire chainbase is loaded into memory (RAM and SWAP), which may take some time.
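For reference, a first start from a snapshot looks something like the below; the directory paths and snapshot filename are placeholders, not our production values:

> nodeos --config-dir ~/config --data-dir ~/data --snapshot ~/snapshots/snapshot-latest.bin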

CPU and Memory Utilisation of mapped_private mode First Start

On nodeos exit, the in-memory chainbase is written to disk; this may take some time depending on how large it is.

Subsequent nodeos starts are faster, as no snapshot is required and only the data needed for execution is paged into memory, resulting in far lower utilisation.

CPU and Memory Utilisation of mapped_private mode Second Start

Subsequent nodeos exits are also faster (depending on how long the node has run), as mapped_private tracks dirty pages and only writes those dirty pages out on exit.

There is also a slight improvement in memory utilisation compared to mapped mode.

CPU and Memory Utilisation of mapped mode

Other than RAM oversubscription and lower utilisation, the real value in using mapped_private, and the reason why EOSphere started using this mode in the first place, is its far lower disk IO.

Performance requirements make it a necessity for operators to place the state folder containing the chainbase database on a high-speed SSD. SSDs have an endurance rating assigned by the manufacturer stating the maximum amount of data that may be written to the drive before failure. This is usually expressed in Terabytes Written (TBW): on a consumer disk it is usually between 150–2000TBW, while on an enterprise drive it is usually in the petabyte range. Essentially, too many disk writes may wear out an SSD and cause failure.
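As an aside, one way to keep an eye on cumulative writes against that endurance rating is smartctl from the smartmontools package, which reports Data Units Written for NVMe drives; the device name below is an assumption:

> sudo smartctl -a /dev/nvme0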

Below is the Drive 1 disk IO (writes) of our example peer node using mapped mode, while the network was seeing between 10–15 Transactions Per Second (TPS).

Drive 1 Disk IO (Writes) using mapped mode

And this was the Drive 1 disk IO (writes) of our example peer node using mapped_private mode, with the network seeing the same 10–15 TPS.

Drive 1 Disk IO (Writes) using mapped_private mode

This demonstrates a massive reduction in the amount of writes using mapped_private.

Writes dropped from approximately 4 Megabytes (MB) per second down to 12 Kilobytes (KB) per second. That's about 120TBW / Year reduced to 0.378TBW / Year.
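Those annualised figures follow directly from the per-second rates:

4 MB/s × 86,400 s/day × 365 days ≈ 126 TB per year (roughly the 120TBW quoted above)
12 KB/s × 86,400 s/day × 365 days ≈ 378 GB, or about 0.378 TB, per year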

This translates to SSDs lasting longer, virtual environments scaling better and cloud environments not being constrained by IO limitations.
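If you would like to observe the same write rates on your own node, iostat from the sysstat package is one simple option; the device name and 5 second interval below are assumptions:

> sudo apt install sysstat
> iostat -m /dev/nvme0n1 5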

In summary, Antelope Leap v5.0.0 has lower CPU utilisation, a more efficient memory footprint and far lower, easily manageable disk IO when using mapped_private.

Be sure to ask any questions in the EOSphere Telegram and EOS Global Telegram.


This guest post was written by Ross Dold of EOSphere. EOSphere are a Block Producer & infrastructure provider on the EOS Mainnet as well as other Antelope based Blockchains. Learn more about their work at EOSphere.io and the links below.

Telegram | Medium | YouTube | Facebook | Twitter | Instagram


EOS Network

The EOS Network is a 3rd generation blockchain platform powered by the EOS VM, a low-latency, highly performant, and extensible WebAssembly engine for deterministic execution of near feeless transactions; purpose-built for enabling optimal Web3 user and developer experiences. EOS is the flagship blockchain and financial center of the Antelope framework, serving as the driving force behind multi-chain collaboration and public goods funding for tools and infrastructure through the EOS Network Foundation (ENF).

EOS Network Foundation

The EOS Network Foundation (ENF) was forged through a vision for a prosperous and decentralized future. Through our key stakeholder engagement, community programs, ecosystem funding, and support of an open technology ecosystem, the ENF is transforming Web3. Founded in 2021, the ENF is the hub for EOS Network, a leading open source platform with a suite of stable frameworks, tools, and libraries for blockchain deployments. Together, we are bringing innovations that our community builds and are committed to a stronger future for all.
EOS Website | Twitter | Discord | LinkedIn | Telegram | YouTube