perf: use spdk mempool per-core cache for io objects pool #1612
Conversation
Signed-off-by: Diwakar Sharma <diwakar.sharma@datacore.com>
With a simple fio run locally on a malloc'ed pool: 4k block size, 8 fio jobs, 32 io depth, randwrite workload.
hmm, I see slightly slower casperf performance with a single-core null device, for example:
I wouldn't expect this to show improvements with a single core, because the caching is unnecessary in that case. Since the cache holds 512 objects, there will be more cache misses. With multiple cores, it helps by keeping threads from dipping their hands into the common pool and contending.
Yes, but it seems to be decreasing single-core performance, or am I missing something?
hmm, I'm unclear what our most reliable benchmark is. I ran the same test that I did earlier, now on a single-core io-engine instance (3 runs), and I see this:
Also, theoretically I would think that read workloads should see more improvement, because reads generally have a shorter path length, which means cache objects are returned sooner and there is less chance of dipping into the common pool.
With fio I seem to get consistent results for multi-core: always ~10k IOPS more with the cache, at a cost of 3x2MiB hugepages (4-core config), so it seems the tradeoff is worth it!
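As a rough sanity check on that memory cost (assuming the three extra hugepages are consumed entirely by the per-core caches, and using the 512-object cache size mentioned above):

```
4 cores x 512 cached objects  = 2048 objects parked in caches
3 extra hugepages x 2 MiB     = 6 MiB extra memory
6 MiB / 2048 objects          ≈ 3 KiB per cached IO object
```

An order-of-magnitude estimate only, since the hugepages may also hold pool metadata and alignment padding.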
bors merge |
Build succeeded: |