Archive Node Size Unexpectedly Exceeds 9 TB vs. Documented 2.1 TB in Erigon v3
As members of the Erigon community, we have encountered an issue that requires attention: an archive node running Erigon v3.0.2/3 is using significantly more disk space than the documentation leads us to expect. In this article, we describe the setup, the observed behaviour, and the questions this raises.
To better understand the issue, let's take a look at the system information:
- Erigon version: v3.0.2 / v3.0.3
- OS & Version: Linux
- Erigon Command (with flags/config): erigon
- --chain=mainnet
- --datadir=/home/erigon/persistence/data
- --http.addr=0.0.0.0
- --rpc.accessList=/home/erigon/acl-config/acl.json
- --rpc.batch.limit=1000
- --rpc.txfeecap=100
- --http.api=eth,erigon,web3,net,debug,txpool,trace
- --http.vhosts=*
- --http.corsdomain=null
- --http
- --ws
- --db.pagesize=4096
- --ethash.dagdir=/home/erigon/persistence/dag
- --maxpeers=100
- --private.api.addr=0.0.0.0:9090
- --private.api.ratelimit=31872
- --rpc.returndata.limit=1500000
- --metrics
- --metrics.addr=0.0.0.0
- --healthcheck
- --authrpc.jwtsecret=/nimbus-data/jwtsecret
- --port=30303
- --db.size.limit=8TB
- --http.timeouts.read=300s
- --externalcl
- --prune.mode=archive
- Consensus Layer: Nimbus v25.5.0
- Consensus Layer Command (with flags/config): nimbus_beacon_node
- --data-dir=/nimbus-data
- --enr-auto-update=false
- --udp-port=9000
- --tcp-port=9000
- --rest=true
- --rest-address=0.0.0.0
- --rest-port=5052
- --metrics=true
- --metrics-address=0.0.0.0
- --metrics-port=8008
- --network=mainnet
- --history=archive
- --el=http://localhost:8551
- --jwt-secret="/nimbus-data/jwtsecret"
- Chain/Network: Ethereum Mainnet
According to the Erigon documentation, the expected size of an archive node is in the 2-3 TB range.
However, the actual disk usage is quite different:
# du . -h
310M ./logs
8.0T ./chaindata
69M ./temp
1.9T ./snapshots
50M ./downloader
219M ./txpool
24M ./nodes
2.1M ./diagnostics
9.9T .
As you can see, the actual node size is significantly higher than expected: the chaindata directory alone accounts for 8.0 TB, bringing the total to 9.9 TB.
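Most of the footprint sits in the chaindata MDBX database, so a useful first check is how large the raw database file is and how much of it is reclaimable free space. The commands below are a minimal sketch, assuming the datadir path from the command above and that the libmdbx db-tools (mdbx_stat) have been built alongside Erigon (e.g. via make db-tools); tool availability and flags may vary between versions.

# Inspect the raw MDBX file and its page statistics (paths taken from the setup above)
ls -lh /home/erigon/persistence/data/chaindata
mdbx_stat -e /home/erigon/persistence/data/chaindata   # whole-DB / environment info
mdbx_stat -f /home/erigon/persistence/data/chaindata   # GC / freelist info (reclaimable pages)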
To reproduce this behaviour, run Erigon on Ethereum mainnet with --prune.mode=archive and serve debug/trace RPC methods.
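For example, a single trace request such as the one below exercises the trace API served by this node. This is a minimal sketch: it assumes the default HTTP RPC port 8545 (no --http.port was set above) and that the node is reachable locally.

# Trace the latest block via the trace API (default HTTP port assumed)
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"trace_block","params":["latest"],"id":1}'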
The Erigon documentation states an archive node size of ~2.1 TB. A ~10 TB footprint after roughly two weeks of operation is unexpected: more than 4x the documented figure.
Is this expected behaviour under current v3 with archive mode? If not, are there any stages or settings I should check for excess storage usage (e.g., chaindata not compacted, or unbounded index growth)?
In conclusion, the disk usage of this archive node running Erigon v3.0.2/3 is well above the documented figure and warrants attention. If you have additional information, logs, or suggestions that could help identify the root cause, please share them. Thank you for your work on Erigon; we will continue to monitor the issue and post updates as the investigation progresses.
Q&A: Archive Node Size Unexpectedly Exceeds 9 TB vs. Documented 2.1 TB in Erigon v3
As a follow-up to the report above, we have compiled a list of frequently asked questions (FAQs) related to the unexpected behaviour of the archive node running Erigon v3.0.2/3. We hope this Q&A section helps you better understand the issue.
Q: What node size does the Erigon documentation lead us to expect for an archive node?
A: According to the Erigon documentation, the expected size of an archive node is in the 2-3 TB range.
Q: Why is the actual node size so much higher than expected?
A: The root cause has not been confirmed. Possible contributing factors include:
- Chaindata not being compacted
- Unbounded index growth
- Other storage usage issues yet to be identified
Q: Is this expected behaviour under current v3 with archive mode?
A: No, this does not appear to be expected behaviour. The Erigon documentation states an archive node size of ~2.1 TB, so a ~10 TB footprint after roughly two weeks is unexpected: more than 4x the documented figure.
Q: How can the node size be reduced?
A: While the root cause is being investigated, you can try the following (a hedged compaction sketch follows this list):
- Compact chaindata
- Check for and fix unbounded index growth
- Review and optimize other storage-related settings
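For the first item, one commonly suggested approach is a compacting copy of the chaindata MDBX database using the libmdbx tools. The sketch below is not an officially documented Erigon procedure; it assumes mdbx_copy is available (e.g. built via make db-tools), that Erigon is stopped, and that there is enough free disk space for a second copy. Flags and destination-path handling may need adjusting for your libmdbx build.

# Stop Erigon first, then write a compacted copy of chaindata into a new directory
mkdir -p /home/erigon/persistence/data/chaindata-compacted
mdbx_copy -c /home/erigon/persistence/data/chaindata /home/erigon/persistence/data/chaindata-compacted
# After verifying the copy, swap the directories and restart the node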
Q: How can similar issues be prevented in the future?
A: To prevent similar issues from occurring, we recommend the following (a simple monitoring sketch follows this list):
- Regularly reviewing and optimizing your node settings
- Monitoring your node size and storage usage over time
- Keeping up to date with the latest Erigon releases and documentation
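As a simple way to track growth over time, the crontab entry below logs per-directory sizes once a day. It is an assumed setup using only standard cron and du, nothing Erigon-specific; the log path is illustrative.

# crontab entry: append per-directory sizes of the datadir to a log every day at 03:00
0 3 * * * du -sh /home/erigon/persistence/data/* >> /home/erigon/du-history.log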
Q: What is the current status of this issue?
A: We are currently investigating the issue and will provide updates as necessary.
Q: Where can I ask questions or share related observations?
A: If you have any questions or concerns related to this issue, please feel free to ask. We will do our best to provide assistance and guidance.
Q: What are the next steps?
A: We will continue to investigate the root cause of this issue and provide updates on our progress, along with any resulting changes to the Erigon documentation.
In conclusion, we hope this Q&A section has provided valuable information and helped you better understand the issue. If you have any further questions or concerns, please do not hesitate to ask.