Cat Nodes API with Protobuf #9097

Closed

Conversation

@VachaShah (Collaborator) commented Aug 3, 2023

Description

The purpose of this draft PR is to create a new cat nodes API that uses protobuf as the serialization/deserialization mechanism for node-to-node communication. We also benchmark its performance against the original cat nodes API. This PR creates a separate path for the new API, but when these changes are eventually merged, much of the code in the original API can be replaced with the newer approach. For the purposes of this POC, the files are kept separate for now.
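As background, the ser/de mechanism being swapped in is the protobuf wire format. Below is a minimal, self-contained sketch (not code from this PR) of what serializing and deserializing a per-node summary looks like, hand-rolled with protobuf-java's CodedOutputStream/CodedInputStream; the field names and numbers are made up for illustration, and an actual implementation would more likely use message classes generated from .proto definitions.

import com.google.protobuf.CodedInputStream;
import com.google.protobuf.CodedOutputStream;

import java.io.IOException;

public class NodeSummaryCodec {

    // Serialize a node id (field 1) and heap-used bytes (field 2) in protobuf wire format.
    public static byte[] write(String nodeId, long heapUsedBytes) throws IOException {
        byte[] buffer = new byte[CodedOutputStream.computeStringSize(1, nodeId)
            + CodedOutputStream.computeInt64Size(2, heapUsedBytes)];
        CodedOutputStream out = CodedOutputStream.newInstance(buffer);
        out.writeString(1, nodeId);
        out.writeInt64(2, heapUsedBytes);
        out.flush();
        return buffer;
    }

    // Deserialize on the receiving node, skipping any unknown fields.
    public static String read(byte[] bytes) throws IOException {
        CodedInputStream in = CodedInputStream.newInstance(bytes);
        String nodeId = null;
        long heapUsedBytes = 0;
        int tag;
        while ((tag = in.readTag()) != 0) {
            switch (tag >>> 3) { // field number
                case 1: nodeId = in.readString(); break;
                case 2: heapUsedBytes = in.readInt64(); break;
                default: in.skipField(tag);
            }
        }
        return nodeId + " heapUsed=" + heapUsedBytes;
    }
}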

This PR represents the work done for the POC and is not to be merged. The changes will be raised, reviewed, and merged in incremental PRs.

Related Issues

#6844
#1287

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Commits are signed per the DCO using --signoff
  • Commit changes are listed out in CHANGELOG.md file (See: Changelog)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following the Developer Certificate of Origin and signing off your commits, please check here.

VachaShah and others added 8 commits August 3, 2023 07:07
Signed-off-by: Vacha Shah <vachshah@amazon.com>
…rch-project#9073)

This commit refactors the following network and transport libraries to
the opensearch common and core libraries respectively:

* o.o.common.network.Cidrs -> :libs:opensearch-common
* o.o.common.network.InetAddresses -> :libs:opensearch-common
* o.o.common.network.NetworkAddress -> :libs:opensearch-common
* o.o.common.transport.NetworkExceptionHelper -> :libs:opensearch-common
* o.o.common.transport.PortsRange -> :libs:opensearch-common

* o.o.common.transport.TransportAddress -> :libs:opensearch-core
* o.o.common.transport.BoundTransportAddress -> :libs:opensearch-core
* o.o.transport.TransportMessage -> :libs:opensearch-core
* o.o.transport.TransportResponse -> :libs:opensearch-core

The purpose is to reduce the change surface area of the core APIs to
minimize impact to downstream consumers while moving toward establishing
a formal API for cloud native or serverless implementations.

Signed-off-by: Nicholas Walter Knize <nknize@apache.org>
…st index deletion. (opensearch-project#8472)

Signed-off-by: Harish Bhakuni <hbhakuni@amazon.com>
…ckpoint validation (opensearch-project#8889)

* Fix test testDropPrimaryDuringReplication and clean up ReplicationCheckpoint validation.

This test is now occasionally failing with replicas having 0 documents. This occurs in a couple of ways:
1. After dropping the old primary, the new primary does not publish a checkpoint to replicas unless it indexes docs from the translog after flipping to primary mode.
If there is nothing to index, it will not publish a checkpoint, but the other replica could never have synced with the original primary and be left out of date.
- This PR fixes this by force-publishing a checkpoint after the new primary flips to primary mode.
2. The replica receives a checkpoint post failover and cancels its sync with the former primary that is still active, recognizing a primary term bump.
However, this cancellation is async, and immediately starting a new replication event could fail as it is still replicating.
- This PR fixes this by attempting to process the latest received checkpoint on failure, if the shard is not failed and is still behind.

This PR also introduces a few changes to ensure the accuracy of the ReplicationCheckpoint tracked on primary & replicas.
- Ensure the checkpoint stored in SegmentReplicationTarget is the checkpoint passed from the primary and not locally computed.  This ensures checks for primary term are accurate and not using a locally computed operationPrimaryTerm.
- Introduces a refresh listener for both primary & replica to update the ReplicationCheckpoint and store it in replicationTracker post refresh rather than redundantly computing when accessed.
- Removes the unnecessary onCheckpointPublished method used to start replication timers manually.  This will happen automatically on primaries once the local checkpoint is updated.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Handle NoSuchFileException when attempting to delete decref'd files.

To avoid divergent logic with remote store, we always incref/decref segmentInfos.files(true), which includes the segments_N file.
Decref to 0 will attempt to delete the file from the store, and it's possible this _N file does not yet exist. This change will ignore a NoSuchFileException while attempting to delete.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Add more unit tests.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Clean up IndexShardTests.testCheckpointReffreshListenerWithNull

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Remove unnecessary catch for NoSuchFileException.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* Add another test for non segrep.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* PR Feedback.

Signed-off-by: Marc Handalian <handalm@amazon.com>

* re-compute replication checkpoint on primary promotion.

Signed-off-by: Marc Handalian <handalm@amazon.com>

---------

Signed-off-by: Marc Handalian <handalm@amazon.com>
Signed-off-by: Vacha Shah <vachshah@amazon.com>
github-actions bot (Contributor) commented Aug 3, 2023

Gradle Check (Jenkins) Run Completed with:

@opensearch-trigger-bot (Contributor) commented:

Compatibility status:



> Task :checkCompatibility
Incompatible components: [https://github.com/opensearch-project/geospatial.git, https://github.com/opensearch-project/security-analytics.git, https://github.com/opensearch-project/anomaly-detection.git, https://github.com/opensearch-project/asynchronous-search.git, https://github.com/opensearch-project/performance-analyzer.git]
Compatible components: [https://github.com/opensearch-project/security.git, https://github.com/opensearch-project/notifications.git, https://github.com/opensearch-project/neural-search.git, https://github.com/opensearch-project/index-management.git, https://github.com/opensearch-project/sql.git, https://github.com/opensearch-project/opensearch-oci-object-storage.git, https://github.com/opensearch-project/job-scheduler.git, https://github.com/opensearch-project/observability.git, https://github.com/opensearch-project/k-nn.git, https://github.com/opensearch-project/alerting.git, https://github.com/opensearch-project/cross-cluster-replication.git, https://github.com/opensearch-project/common-utils.git, https://github.com/opensearch-project/reporting.git, https://github.com/opensearch-project/performance-analyzer-rca.git, https://github.com/opensearch-project/ml-commons.git]

BUILD SUCCESSFUL in 33m 18s

@gashutos (Contributor) commented Aug 14, 2023

Strictly speaking from the performance dimension: I was just benchmarking the Netty HTTP layer for the search path, and we hardly spend 1 or 2 ms in the Netty layer per request.
Are any benchmarks available for protobuf?

@opensearch-trigger-bot (Contributor) commented:

This PR is stalled because it has been open for 30 days with no activity. Remove the stalled label, or comment, or this will be closed in 7 days.

opensearch-trigger-bot added the stalled label on Sep 13, 2023
@dblock (Member) commented Oct 2, 2023

@VachaShah Given #6844 (comment), do you have any ideas about how to bring this into OpenSearch as maybe an experimental feature? What are next steps?

@VachaShah (Collaborator, Author) commented:

@VachaShah Given #6844 (comment), do you have any ideas about how to bring this into OpenSearch as maybe an experimental feature? What are next steps?

I am working on getting the changes from the POC in #9097 merged into the repo. We are also working on getting the numbers for APIs like search.

opensearch-trigger-bot removed the stalled label on Oct 3, 2023
* Constructs a new transport message with the data from the {@link byte[]}. This is
* currently a no-op
*/
public TransportMessage(byte[] in) {}

Could we avoid adding this change by reusing the existing StreamInput?
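For illustration only (not code from this PR), one way that suggestion could look is to read the raw payload through the existing StreamInput and hand it to the protobuf parser; the parsing step below is hypothetical:

public TransportMessage(StreamInput in) throws IOException {
    // Reuse the existing OpenSearch stream abstraction instead of adding a byte[] constructor.
    byte[] protobufPayload = in.readByteArray();
    // The protobuf message would be parsed from protobufPayload here (hypothetical step).
}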

@peternied (Member) left a comment:

This is really exciting work - thanks for getting this out there.

For the purposes of this POC, the files are kept separate for now.

What are your thoughts on the end state for this change? I'm hesitant to merge so much duplicate code.

* @param out Output to write the {@code value} to
* @param value The value to add
*/
void write(OutputStream out, V value) throws IOException;

It's strange that the writer takes an OutputStream, but the reader isn't a symmetrical version that uses InputStream. Can we align the types to be consistent, be it byte[] or *Stream?
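For illustration, a symmetric pair along these lines could look like the following (hypothetical interface names, not the PR's API):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Writer and reader are symmetric: both operate on streams.
interface ProtobufWriter<V> {
    void write(OutputStream out, V value) throws IOException;
}

interface ProtobufReader<V> {
    V read(InputStream in) throws IOException;
}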

*
* @opensearch.internal
*/
public class ProtobufTask {

This class is nearly identical to the existing Task; why do we need a largely identical protobuf version? This seems to imply there is coupling between the existing OpenSearch data models and their serialized form. Can we decompose these relationships further?
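One possible direction, sketched with hypothetical names (not from this PR): keep the existing Task as the only domain model and move the protobuf knowledge into a separate converter, so the serialized form is not coupled to the model class.

import com.google.protobuf.CodedOutputStream;

import java.io.IOException;
import java.io.OutputStream;

// Stand-in for the existing domain model; the real Task class would stay unchanged.
class Task {
    final long id;
    final String action;
    Task(long id, String action) { this.id = id; this.action = action; }
}

// The protobuf mapping lives here, not in the model or in a parallel ProtobufTask.
class TaskProtoConverter {
    static void write(OutputStream stream, Task task) throws IOException {
        CodedOutputStream out = CodedOutputStream.newInstance(stream);
        out.writeInt64(1, task.id);       // field 1: task id
        out.writeString(2, task.action);  // field 2: action name
        out.flush();
    }
}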


@VachaShah (Collaborator, Author) commented:

This is really exciting work - thanks for getting this out there.

For the purposes of this POC, the files are kept separate for now.

What are your thoughts on the end state for this change? I'm hesitant to merge so much duplicate code.

Hi @peternied, thank you for your comments! This PR is not to be merged; it's a draft PR for the work done to get numbers, for which I created a parallel API _cat/nodes_protobuf to compare with the original version. The changes to be merged will go in incremental PRs, which I will raise later. This PR is only out here in the draft state to capture the POC work.

I think I was not clear in the description of the PR, so I added a comment about this as well.

@dblock (Member) commented Oct 13, 2023

Are there any parts of this PR that can be merged? Otherwise I think we can document this in #6844 and close it. (And nice work!)

@VachaShah (Collaborator, Author) commented:

Are there any parts of this PR that can be merged? Otherwise I think we can document this in #6844 and close it. (And nice work!)

Thank you @dblock! I will raise incremental PRs for this, so closing this one now. I will make sure to update #6844 to document this.
