[BUG] Serialization bugs can cause node drops #1885

Merged
3 commits merged into opensearch-project:main on Jan 14, 2022

Conversation

@reta (Collaborator) commented Jan 11, 2022

Signed-off-by: Andriy Redko andriy.redko@aiven.io

Description

Most commonly, backward-incompatible changes to APIs like stats can cause ser/de over the wire to fail. A bit more detail on the (de)serialization handling inside InboundHandler:

  1. If there is a serialization bug, it manifests itself in InboundHandler#messageReceived, causing it to throw an IllegalStateException.

Interestingly, it is only raised when the input stream has unconsumed bytes; the case when the stream has fewer bytes than needed to reconstruct the payload is not classified as a serialization issue (it results in java.io.EOFExceptions). More importantly, handling differs depending on whether the message is a request or a response (a minimal sketch illustrating both failure modes follows this list):

  • if the message is a request, all exceptions are caught inside handleRequest, they are not escalated to TcpTransport#handleException, and an error response is returned
  • if the message is a response, there are two outcomes:
    • when the input stream has unconsumed bytes, the IllegalStateException is thrown and escalated up to TcpTransport#handleException, which ends up closing the specific TCP channel that caused the serialization issue; an example is below:
     [2022-01-12T09:09:57,119][WARN ][o.o.t.TcpTransport       ] [...] exception caught on transport layer [Netty4TcpChannel{localAddress=/127.0.0.1:58454, remoteAddress=127.0.0.1/127.0.0.1:9300}], closing connection
     java.lang.IllegalStateException: Message not fully read (response) for requestId [131], handler [org.opensearch.transport.TransportService$ContextRestoreResponseHandler/org.opensearch.transport.TransportService$6/[cluster:monitor/state]:org.opensearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$1@35ecd191/org.opensearch.action.support.ChannelActionListener@798eebdf], error [false]; resetting
             at org.opensearch.transport.InboundHandler.messageReceived(InboundHandler.java:166) ~[opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
             at org.opensearch.transport.InboundHandler.inboundMessage(InboundHandler.java:108) ~[opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
             at org.opensearch.transport.TcpTransport.inboundMessage(TcpTransport.java:759) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
             at org.opensearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:170) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
             at org.opensearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:145) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
             at org.opensearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:110) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
             at org.opensearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:94) [transport-netty4-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
             at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) [netty-handler-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.69.Final.jar:4.1.69.Final]
             at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.69.Final.jar:4.1.69.Final]
             at java.lang.Thread.run(Thread.java:833) [?:?]
     [2022-01-12T09:09:57,123][INFO ][o.o.c.c.Coordinator      ] [...] master node [{runTask-0}{jQ-T77-VRrua5dCDqczv3A}{XWRZk1ptRHKYMteMm2bWyQ}{127.0.0.1}{127.0.0.1:9300}{dimr}{testattr=test}] failed, restarting discovery
     org.opensearch.transport.NodeDisconnectedException: [runTask-0][127.0.0.1:9300][disconnected] disconnected
    
    
    • when the input stream throws java.io.EOFException (or any other issue manifests during deserialization), it is wrapped into a TransportSerializationException and an error response is sent, never reaching TcpTransport#handleException
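
For illustration, here is a minimal, self-contained sketch of how these two failure modes arise when the reader's expectations no longer match the writer's wire format. It uses plain java.io types rather than OpenSearch's StreamInput/InboundHandler, and all names in it are hypothetical:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class WireMismatchSketch {
    public static void main(String[] args) throws IOException {
        // The writer serializes two fields (think of it as the older wire format).
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(42);
            out.writeLong(7L);
        }
        byte[] payload = bytes.toByteArray();

        // Failure mode 1: the reader consumes fewer bytes than were written.
        // Deserialization "succeeds" but leaves unconsumed bytes on the stream,
        // the situation InboundHandler reports as "Message not fully read ... resetting".
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(payload))) {
            in.readInt();                                             // reads only the first field
            System.out.println("leftover bytes: " + (in.read() != -1)); // prints true
        }

        // Failure mode 2: the reader expects more fields than were written.
        // The underflow surfaces as java.io.EOFException during deserialization.
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(payload))) {
            in.readInt();
            in.readLong();
            in.readLong();                                            // the third field does not exist
        } catch (EOFException e) {
            System.out.println("underflow: " + e);
        }
    }
}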

The suggested changes are intended to report more detail about the cause of the (de)serialization failure and to make the handling of (de)serialization issues uniform:

  1. Treat java.io.EOFException as a (de)serialization issue, wrapping it into an IllegalStateException ("Message fully read (request) but more data is expected for requestId ..."): this does not change the current behaviour, only a more informative message is returned to the client.
  2. For responses, when the input stream has unconsumed bytes, throw an IllegalStateException and wrap it into a TransportSerializationException. This changes the current behaviour: the exception is handled in the regular way, without escalating to TcpTransport#handleException and closing the channel (the desired outcome), the same way java.io.EOFException is handled. An example is below (compare with the previous stack trace; a sketch of the proposed handling follows the trace):
[2022-01-12T08:41:56,603][WARN ][o.o.t.InboundHandler     ] [...] Failed to deserialize response from [127.0.0.1/127.0.0.1:9300]
org.opensearch.transport.TransportSerializationException: Failed to deserialize response from handler [org.opensearch.transport.TransportService$ContextRestoreResponseHandler/org.opensearch.transport.TransportService$6/[cluster:monitor/state]:org.opensearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$1@1314b3b2/org.opensearch.action.support.ChannelActionListener@4770f3a8]
        at org.opensearch.transport.InboundHandler.handleResponse(InboundHandler.java:320) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at org.opensearch.transport.InboundHandler.messageReceived(InboundHandler.java:155) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at org.opensearch.transport.InboundHandler.inboundMessage(InboundHandler.java:109) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at org.opensearch.transport.TcpTransport.inboundMessage(TcpTransport.java:759) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at org.opensearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:170) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at org.opensearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:145) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at org.opensearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:110) [opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at org.opensearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:94) [transport-netty4-client-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) [netty-handler-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.69.Final.jar:4.1.69.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.69.Final.jar:4.1.69.Final]
        at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: java.lang.IllegalStateException: Message not fully read (response) for requestId [18], handler [org.opensearch.transport.TransportService$ContextRestoreResponseHandler/org.opensearch.transport.TransportService$6/[cluster:monitor/state]:org.opensearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$1@1314b3b2/org.opensearch.action.support.ChannelActionListener@4770f3a8], error [false]; resetting
        at org.opensearch.transport.InboundHandler.handleResponse(InboundHandler.java:308) ~[opensearch-2.0.0-SNAPSHOT.jar:2.0.0-SNAPSHOT]
        ... 26 more
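
To make the intended behaviour concrete, here is a rough sketch of the proposed response-side handling. It is not the actual InboundHandler code: the exception class, the Reader interface, and the method below are stand-ins chosen for illustration, and plain InputStream is used instead of OpenSearch's StreamInput. The point is that both underflow (EOFException) and leftover bytes end up wrapped in the same serialization-failure exception, which is then handled locally rather than escalated to TcpTransport#handleException:

import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical stand-in for TransportSerializationException.
class SerializationFailureSketch extends RuntimeException {
    SerializationFailureSketch(String message, Throwable cause) {
        super(message, cause);
    }
}

final class ResponseReadSketch {

    // Minimal reader abstraction for the sketch.
    interface Reader<T> {
        T read(InputStream in) throws IOException;
    }

    // Deserialize a response and verify the message was consumed exactly.
    static <T> T readResponse(InputStream stream, Reader<T> reader, long requestId) {
        try {
            T response = reader.read(stream);
            // A non -1 result from read() means unconsumed bytes are left on the stream;
            // previously this escalated as an IllegalStateException and closed the channel.
            if (stream.read() != -1) {
                throw new SerializationFailureSketch(
                    "Message not fully read (response) for requestId [" + requestId + "]; resetting",
                    new IllegalStateException("unconsumed bytes after response"));
            }
            return response;
        } catch (EOFException e) {
            // Underflow: fewer bytes than the handler needs, now treated the same way.
            throw new SerializationFailureSketch(
                "More data is expected for requestId [" + requestId + "]", e);
        } catch (IOException e) {
            throw new SerializationFailureSketch("Failed to deserialize response", e);
        }
    }
}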

Issues Resolved

Closes #624

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@opensearch-ci-bot (Collaborator): Can one of the admins verify this patch?

@opensearch-ci-bot (Collaborator): ❌ Gradle Check failure 95da4fd94f8a46ec9e79e14fc48a334fa1a5d69f (Log 1860, Reports 1860)

@dblock dblock requested a review from andrross January 11, 2022 19:14
@opensearch-ci-bot (Collaborator): ❌ Gradle Check failure b12d93f48c3558b923e50f12a2977268df3507f0 (Log 1862, Reports 1862)

@opensearch-ci-bot (Collaborator): ✅ Gradle Check success 5bdeb781ed5c1508c838408f989c7e5a334b3224 (Log 1864, Reports 1864)

@opensearch-ci-bot (Collaborator): ❌ Gradle Check failure 18ba5adcd5d00f6be0b26a6a260055c8b9bbb0a9 (Log 1874, Reports 1874)

Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
@opensearch-ci-bot (Collaborator): ✅ Gradle Check success 8238308 (Log 1875, Reports 1875)

}
} catch (EOFException e) {
// Another flavor of (de)serialization issues is when the stream contains fewer bytes than
// the request handler needs to deserialize the payload.
@reta (Collaborator, Author): Explicitly handle EOFException as a flavor of (de)serialization failure.

@@ -297,6 +300,23 @@ private static void sendErrorResponse(String actionName, TransportChannel transp
try {
response = handler.read(stream);
response.remoteAddress(new TransportAddress(remoteAddress));

if (stream != EMPTY_STREAM_INPUT) {
@reta (Collaborator, Author): Treat unconsumed stream as a (de)serialization failure and handle it as a TransportSerializationException.

// Check the entire message has been read
final int nextByte = stream.read();
// calling read() is useful to make sure the message is fully read, even if there is an EOS marker
if (nextByte != -1) {
@reta (Collaborator, Author): Treat unconsumed stream as a (de)serialization failure and handle it as a TransportSerializationException.

@Bukhtawar (Collaborator) commented Jan 12, 2022:

I think there are redundant code blocks for

  1. handlerResponseError
  2. handleResponse

Can we simplify this further?

@reta (Collaborator, Author): Those two are slightly different (e.g. logging, remote address setting); I am not sure the simplification would make sense here (unless we alter the behaviour slightly).

@@ -211,7 +224,7 @@ public TestResponse read(StreamInput in) throws IOException {

BytesReference fullResponseBytes = channel.getMessageCaptor().get();
BytesReference responseContent = fullResponseBytes.slice(headerSize, fullResponseBytes.length() - headerSize);
- Header responseHeader = new Header(fullRequestBytes.length() - 6, requestId, responseStatus, version);
+ Header responseHeader = new Header(fullResponseBytes.length() - 6, requestId, responseStatus, version);
@reta (Collaborator, Author): Seems to be an issue with copy/paste :-)

@reta reta changed the title [WIP] [BUG] Serialization bugs can cause node drops [BUG] Serialization bugs can cause node drops Jan 12, 2022
@reta reta marked this pull request as ready for review January 12, 2022 15:30
@reta reta requested a review from a team as a code owner January 12, 2022 15:30
if (nextByte != -1) {

try {
final T request = reg.newRequest(stream);
@andrross (Member): The EOFException will come from this line, right? I'd prefer to keep the EOFException try/catch more tightly bounded in order to make things easier to follow with less indentation. What do you think?

@reta (Collaborator, Author):

> The EOFException will come from this line, right?

Correct

> I'd prefer to keep the EOFException try/catch more tightly bounded in order to make things easier to follow with less indentation. What do you think?

I thought about that as well; the issue is that request is also used at the end of the code block:

...

if (ThreadPool.Names.SAME.equals(executor)) {
    try {
        reg.processMessageReceived(request, transportChannel);
    } catch (Exception e) {
        sendErrorResponse(reg.getAction(), transportChannel, e);
    }
} else {
    threadPool.executor(executor).execute(new RequestHandler<>(reg, request, transportChannel));
}

So the alternative would introduce a nullable request instance; I am not sure it is better (readability would suffer, I think). With the IllegalStateException check wrapped into a dedicated function it may look better, actually.

@andrross (Member) commented Jan 12, 2022:

Since the catch block always throws, you can do this:

final T request;
try {
    request = reg.newRequest(stream);
} catch (EOFException e) {
    // Another flavor of (de)serialization issues is when the stream contains fewer bytes than
    // the request handler needs to deserialize the payload.
    throw new IllegalStateException(
        "Message fully read (request) but more data is expected for requestId ["
            + requestId
            + "], action ["
            + action
            + "]; resetting",
        e
    );
}
request.remoteAddress(new TransportAddress(channel.getRemoteAddress()));
checkStreamIsFullyConsumed(requestId, action, stream);

final String executor = reg.getExecutor();
if (ThreadPool.Names.SAME.equals(executor)) {
    try {
        reg.processMessageReceived(request, transportChannel);
    } catch (Exception e) {
        sendErrorResponse(reg.getAction(), transportChannel, e);
    }
} else {
    threadPool.executor(executor).execute(new RequestHandler<>(reg, request, transportChannel));
}
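
The snippet above calls checkStreamIsFullyConsumed, whose body is not shown in the fragments quoted here. Purely as a sketch of what such a dedicated check could look like, with the parameter list and message assumed from the fragments above rather than taken from the merged code, and plain InputStream standing in for OpenSearch's StreamInput:

import java.io.IOException;
import java.io.InputStream;

final class StreamConsumptionCheckSketch {
    // Verify the entire message has been read (hypothetical shape of the helper).
    static void checkStreamIsFullyConsumed(long requestId, String action, InputStream stream) throws IOException {
        // Calling read() is useful to make sure the message is fully read,
        // even if there is an EOS marker.
        final int nextByte = stream.read();
        if (nextByte != -1) {
            throw new IllegalStateException(
                "Message not fully read (request) for requestId ["
                    + requestId
                    + "], action ["
                    + action
                    + "]; resetting");
        }
    }
}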

@reta (Collaborator, Author): That's the part I don't like, to be fair:

final T request;

try {
    request = reg.newRequest(stream);
}

I would try with the function instead, just a moment

@reta (Collaborator, Author): @andrross done, thank you!

@Bukhtawar (Collaborator) left a comment: Thanks for the thorough tests @reta.

Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
@opensearch-ci-bot (Collaborator): ❌ Gradle Check failure f8f22d1 (Log 1881, Reports 1881)

@reta (Collaborator, Author) commented Jan 12, 2022:

> ❌ Gradle Check failure f8f22d1 (Log 1881, Reports 1881)

Hehe ... bad luck ...

Bad Gateway
      > Could not resolve com.diffplug.spotless:spotless-lib-extra:2.7.0.
        Required by:
            project : > com.diffplug.spotless:com.diffplug.spotless.gradle.plugin:5.6.1 > com.diffplug.spotless:spotless-plugin-gradle:5.6.1
         > Could not resolve com.diffplug.spotless:spotless-lib-extra:2.7.0.
            > Could not get resource 'https://plugins.gradle.org/m2/com/diffplug/spotless/spotless-lib-extra/2.7.0/spotless-lib-extra-2.7.0.module'.
               > Could not GET 'https://jcenter.bintray.com/com/diffplug/spotless/spotless-lib-extra/2.7.0/spotless-lib-extra-2.7.0.module'. Received status code 502 from server: Bad Gateway
      > Could not resolve com.diffplug.durian:durian-core:1.2.0.
        Required by:
            project : > com.diffplug.spotless:com.diffplug.spotless.gradle.plugin:5.6.1 > com.diffplug.spotless:spotless-plugin-gradle:5.6.1
         > Skipped due to earlier error
      > Could not resolve com.diffplug.durian:durian-io:1.2.0.
        Required by:
            project : > com.diffplug.spotless:com.diffplug.spotless.gradle.plugin:5.6.1 > com.diffplug.spotless:spotless-plugin-gradle:5.6.1
         > Skipped due to earlier error
      > Could not resolve com.diffplug.durian:durian-collect:1.2.0.
        Required by:
            project : > com.diffplug.spotless:com.diffplug.spotless.gradle.plugin:5.6.1 > com.diffplug.spotless:spotless-plugin-gradle:5.6.1
         > Skipped due to earlier error
      > Could not resolve org.eclipse.jgit:org.eclipse.jgit:5.8.0.202006091008-r.
        Required by:
            project : > com.diffplug.spotless:com.diffplug.spotless.gradle.plugin:5.6.1 > com.diffplug.spotless:spotless-plugin-gradle:5.6.1
         > Skipped due to earlier error

@opensearch-ci-bot (Collaborator): ❌ Gradle Check failure ba3b9fb2f2c1568520ab68bef77cfd1867f8ca3b (Log 1882, Reports 1882)

@opensearch-ci-bot (Collaborator): ✅ Gradle Check success 7d5eb12c9c57cc9fca4a7c42a7344516b97bd472 (Log 1903, Reports 1903)

@opensearch-ci-bot (Collaborator): ❌ Gradle Check failure e4d2a618b8d2c90f303d5096a38c0ed07f5b4ce8 (Log 1904, Reports 1904)

Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
@opensearch-ci-bot (Collaborator): ❌ Gradle Check failure 8889dcf8ad187d6660a088a67e66f1e1c47907d8 (Log 1905, Reports 1905)

* @param error "true" if response represents error, "false" otherwise
* @throws IOException IOException
*/
private void checkStreamIsFullyConsumed(
Collaborator: Are we also closing the stream on exception?

@reta (Collaborator, Author): Yes, but not in the handler; AFAIK the streams for InboundMessage are closed within InboundPipeline.
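
To illustrate that answer with a generic sketch (the names below are hypothetical and this is not the actual InboundPipeline code): the component that owns the stream closes it with try-with-resources, so even when the consumption check throws inside the handler, the stream is still closed by its owner.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

final class StreamOwnershipSketch {
    // The "pipeline" owns the stream and guarantees it is closed,
    // even when the "handler" throws a (de)serialization failure.
    static void handleInbound(byte[] wireBytes) throws IOException {
        try (InputStream stream = new ByteArrayInputStream(wireBytes)) {
            handleMessage(stream); // may throw; the stream is closed regardless
        }
    }

    static void handleMessage(InputStream stream) throws IOException {
        if (stream.read() != -1) {
            throw new IllegalStateException("unconsumed bytes; resetting");
        }
    }
}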

@opensearch-ci-bot (Collaborator): ✅ Gradle Check success 2834ec6 (Log 1906, Reports 1906)

@nknize nknize merged commit f059738 into opensearch-project:main Jan 14, 2022
reta added a commit to reta/OpenSearch that referenced this pull request Jan 14, 2022
This commit restructures InboundHandler to ensure all data 
is consumed over the wire.

Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
@nknize nknize added bug Something isn't working pending backport Identifies an issue or PR that still needs to be backported v2.0.0 Version 2.0.0 labels Jan 14, 2022
nknize pushed a commit that referenced this pull request Jan 14, 2022
This commit restructures InboundHandler to ensure all data 
is consumed over the wire.

Signed-off-by: Andriy Redko <andriy.redko@aiven.io>
@nknize nknize removed the pending backport Identifies an issue or PR that still needs to be backported label Jan 14, 2022