
# Common issues when running Blockbook or implementing additional coins

## Out of memory when doing initial synchronization

To reduce the memory footprint of the initial sync:

- disable the RocksDB cache with the parameter `-dbcache=0`; the default cache size is 500 MB
- run Blockbook with the parameter `-workers=1`. This disables bulk import mode, which caches a lot of data in memory (outside the RocksDB cache). The import runs about half as fast, but especially for smaller blockchains this is usually not a problem. A combined invocation is sketched after this list.
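For illustration, a reduced-memory initial sync could be started like this. This is a minimal sketch: the `-blockchaincfg` and `-datadir` paths are placeholders, and the flags other than `-dbcache` and `-workers` are the usual Blockbook startup flags for your deployment.

```sh
# initial sync with reduced memory footprint:
# -dbcache=0 disables the RocksDB cache, -workers=1 disables bulk import mode
blockbook -sync \
  -blockchaincfg=/path/to/blockchaincfg.json \
  -datadir=/path/to/db \
  -dbcache=0 \
  -workers=1 \
  -logtostderr
```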

Please add your experience to this issue.

## Error `internalState: database is in inconsistent state and cannot be used`

Blockbook was killed during the initial import, most commonly by the OOM killer. By default, Blockbook performs the initial import in bulk import mode, which for performance reasons does not immediately write all data to the database. If Blockbook is killed during this phase, the database is left in an inconsistent state.

See above for how to reduce the memory footprint, then delete the database files and run the import again, for example as sketched below.
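A minimal recovery sketch, assuming the database lives in the directory passed to Blockbook via `-datadir` (the path below is a placeholder):

```sh
# stop Blockbook first, then remove the inconsistent database
rm -rf /path/to/db
# re-run the initial import, e.g. with the reduced-memory flags shown above
blockbook -sync -blockchaincfg=/path/to/blockchaincfg.json -datadir=/path/to/db -dbcache=0 -workers=1
```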

See the related issues for more info.

## My coin implementation is reporting parse errors when importing the blockchain

Your coin's block/transaction data may not be compatible with BitcoinParser ParseBlock/ParseTx, which is used by default. In that case, implement your coin in a way similar to zcash and some other coins. The principle is not to parse the block/transaction data in Blockbook but instead to get parsed transactions as JSON from the backend, as in the sketch below.
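A minimal sketch of that approach, modeled on the zcash implementation in `bchain/coins/zec`. The coin name `mycoin` is a placeholder, and exact type names and import paths may differ between Blockbook versions:

```go
package mycoin

import (
	"encoding/json"

	"github.com/trezor/blockbook/bchain"
	"github.com/trezor/blockbook/bchain/coins/btc"
)

// MyCoinRPC embeds BitcoinRPC so that all standard RPC calls are inherited.
type MyCoinRPC struct {
	*btc.BitcoinRPC
}

// NewMyCoinRPC creates the RPC client and turns off raw block parsing:
// with ParseBlocks=false, BitcoinRPC requests verbose (JSON) blocks and
// transactions from the backend instead of decoding raw data with
// BitcoinParser ParseBlock/ParseTx.
func NewMyCoinRPC(config json.RawMessage, pushHandler func(bchain.NotificationType)) (bchain.BlockChain, error) {
	b, err := btc.NewBitcoinRPC(config, pushHandler)
	if err != nil {
		return nil, err
	}
	m := &MyCoinRPC{b.(*btc.BitcoinRPC)}
	m.ParseBlocks = false
	return m, nil
}
```

With `ParseBlocks` set to false, the backend does the parsing and Blockbook only consumes the resulting JSON, so incompatible raw block formats never reach BitcoinParser.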

## Data storage in RocksDB

Blockbook stores data in the key-value store RocksDB. The database format is described here.