refactor(pkg/trie/triedb): node level caching #4239
base: development
Conversation
Co-authored-by: Haiko Schol <539509+haikoschol@users.noreply.github.com>
LGTM, seems solid to me, good work 🚀!
var depth uint
for {
	node, err := cache.GetOrInsertNode(hash, func() (CachedNode[H], error) {
		prefixedKey := append(nibbleKey.Mid(keyNibbles).Left().JoinedBytes(), hash.Bytes()...)
I would suggest adding a comment here explaining how the prefixed key is extracted from the nibbles.
for {
	// Get node from DB
	prefixedKey := append(nibbleKey.Mid(keyNibbles).Left().JoinedBytes(), hash.Bytes()...)
I would suggest adding a comment here explaining how the prefixed key is extracted from the nibbles.
Looks good.
Just some comments. One is specifically about the performance of the iterator, but it's not a blocker for this PR; we can create an issue and optimize it later if possible.
// TODO: we still have some room to improve here, we are caching the raw
// node data and we need to decode it every time we access it. We could
// cache the decoded node instead and avoid decoding it every time.
// This is the same as iterate_all_entries_without_cache since the raw iterator calls TrieDB.getNodeOrLookup
b.Run("iterate_all_entries_with_cache", func(b *testing.B) {
Is the iterator really using the cache? I'm getting the same results for this benchmark with and without the cache:
Benchmark_NodesCache/iterate_all_entries_without_cache-12 24370 49277 ns/op 18688 B/op 2105 allocs/op
Benchmark_NodesCache/iterate_all_entries_with_cache-12 22998 49445 ns/op 18688 B/op 2105 allocs/op
I think you need to use the cache in the triedb.fetchValue method too.
@@ -24,7 +24,7 @@ type (
 	// InlineNode contains bytes of the encoded node data
 	InlineNode []byte
 	// HashedNode contains a hash used to lookup in db for encoded node data
-	HashedNode[H any] struct{ Hash H }
+	HashedNode[H hash.Hash] struct{ Hash H }
Are there no issues when we are using prefixed keys here?
Changes
- Introduces CachedNode and CachedValue, which are used in the TrieCache interface. These are analogous types to codec.EncodedNode and codec.EncodedValue.
- Adds CachedNodeAccess to the recorder; accesses are recorded accordingly in TrieDB functions.
- Adds TrieDB.GetHash and triedb.GetWith functions that use TrieLookup with the supplied cache.
Tests
go test -tags integration github.com/ChainSafe/gossamer
Issues
closes #4157