For each reconciliation the operator systematically performs some `bcrypt` operations to ensure Elasticsearch user password hashes are up to date in the file realm. These cryptographic operations can have a significant impact in CPU-constrained environments.

The number of `bcrypt.CompareHashAndPassword` calls is linearly proportional to the number of users managed in the file realm. I think there are 3 of them by default (not counting the ones generated for some associations). Using an LRU cache like github.com/hashicorp/golang-lru can significantly improve the situation.

(In the example above I added the LRU cache and a couple of additional spans.)
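To make the idea concrete, here is a minimal, stdlib-only sketch of the caching pattern: a small LRU that memoizes successful (hash, password) verifications so repeated reconciliations skip the expensive comparison. The `checkFunc` parameter stands in for `bcrypt.CompareHashAndPassword`; in the operator itself github.com/hashicorp/golang-lru would provide the cache, and the key scheme and type names here are illustrative assumptions, not the actual implementation.

```go
package main

import (
	"container/list"
	"fmt"
)

// checkFunc stands in for an expensive comparison such as
// bcrypt.CompareHashAndPassword; it returns nil when the password matches.
type checkFunc func(hash, password []byte) error

// cachedChecker memoizes successful verifications in a small LRU so that
// repeated reconciliations with the same credentials skip the bcrypt work.
type cachedChecker struct {
	check checkFunc
	size  int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> node in order
}

func newCachedChecker(check checkFunc, size int) *cachedChecker {
	return &cachedChecker{
		check: check,
		size:  size,
		order: list.New(),
		items: map[string]*list.Element{},
	}
}

func (c *cachedChecker) Compare(hash, password []byte) error {
	key := string(hash) + "\x00" + string(password)
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el) // cache hit: no expensive call
		return nil
	}
	if err := c.check(hash, password); err != nil {
		return err // only successful comparisons are cached
	}
	c.items[key] = c.order.PushFront(key)
	if c.order.Len() > c.size { // evict least recently used entry
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(string))
	}
	return nil
}

func main() {
	calls := 0
	// slow pretends to be bcrypt: count how often it actually runs.
	slow := func(hash, password []byte) error {
		calls++
		if string(hash) == "hash:"+string(password) {
			return nil
		}
		return fmt.Errorf("password mismatch")
	}
	c := newCachedChecker(slow, 2)
	c.Compare([]byte("hash:pw1"), []byte("pw1")) // miss: runs slow check
	c.Compare([]byte("hash:pw1"), []byte("pw1")) // hit: served from cache
	fmt.Println("expensive calls:", calls)
}
```

Caching only successful comparisons keeps the cache from being poisoned by transient mismatches, and the LRU bound keeps memory use predictable even if many distinct credentials pass through.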
The open question is finding a good cache size: it could be the `max-concurrent-reconciles` value (with a multiplier?) or inferred from the available memory (a bit involved, maybe).
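The first option could be sketched as below. The function name and the choice of the per-reconciliation multiplier (the ~3 default file-realm users mentioned earlier) are assumptions for illustration, not a proposal for the actual heuristic.

```go
package main

import "fmt"

// cacheSizeFor derives an LRU size from the controller's
// max-concurrent-reconciles setting times a per-reconciliation
// multiplier (e.g. the number of file-realm users touched per
// reconciliation). Purely illustrative.
func cacheSizeFor(maxConcurrentReconciles, usersPerReconcile int) int {
	size := maxConcurrentReconciles * usersPerReconcile
	if size < 1 {
		return 1 // always keep at least one entry
	}
	return size
}

func main() {
	// e.g. 3 concurrent reconciles x 3 default file-realm users.
	fmt.Println(cacheSizeFor(3, 3))
}
```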