Is your feature request related to a problem? Please describe
Currently, we store our remote cluster state as a single global metadata file plus a per-index metadata file for each index, and track all of them in a manifest file. As the cluster grows and the cluster state grows with usage, the global metadata file grows as well.
Whenever a cluster state update is triggered, we need to write the updated metadata to the remote store. If the change touches the global metadata file, we upload the entire file again, even for a small settings change, which increases cluster state update latency as the file grows.
Describe the solution you'd like
We propose splitting the global metadata file into the following components:
This way, if only a setting is modified, the other files are left untouched. If multiple files need to be updated, they are uploaded in parallel, which is still better than uploading the full file.
This change should significantly reduce cluster state update latency in large clusters.
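The proposal above can be sketched as a diff-and-upload step: hash each metadata component, upload only the components whose content changed, and do those uploads in parallel. The sketch below is illustrative only, not OpenSearch code; the component names, `upload` stand-in, and manifest layout are all assumptions for the example.

```python
# Illustrative sketch (hypothetical names, not the OpenSearch implementation):
# upload only the changed metadata components, in parallel, and record every
# component reference (new or reused) in a manifest.
import hashlib
import json
from concurrent.futures import ThreadPoolExecutor


def component_hash(component: dict) -> str:
    """Stable content hash of a metadata component's serialized form."""
    blob = json.dumps(component, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


def upload(name: str, component: dict) -> str:
    """Stand-in for a remote-store PUT; returns the uploaded object's name."""
    # A real system would write the blob to the remote store here.
    return f"{name}-{component_hash(component)[:8]}"


def publish_changed(previous: dict, current: dict) -> dict:
    """Upload only components whose content changed; reuse refs for the rest."""
    changed = {
        name: comp
        for name, comp in current.items()
        if component_hash(comp) != component_hash(previous.get(name, {}))
    }
    # Unchanged components keep their existing remote object references.
    manifest = {
        name: f"{name}-{component_hash(previous[name])[:8]}"
        for name in current
        if name not in changed
    }
    # Changed components are uploaded concurrently rather than sequentially.
    with ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(upload, name, comp)
            for name, comp in changed.items()
        }
        for name, fut in futures.items():
            manifest[name] = fut.result()
    return manifest
```

For example, if only the settings component changes between two cluster states, `publish_changed` uploads just that one small object and reuses the previous references for everything else, instead of re-uploading the whole global metadata blob.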
Related component
Cluster Manager
Describe alternatives you've considered
No response
Additional context
No response
[Triage - attendees 12345] @shiv0408 Thanks for creating this issue; however, it isn't being accepted because it isn't clear what the problem is or how it is addressed. Please feel free to open a new issue after addressing the reason.