First WebClient request always fails after some idle time #764

Closed
martin-tarjanyi opened this issue Jun 22, 2019 · 22 comments

@martin-tarjanyi

Expected behavior

WebClient HTTP request succeeds after 5 minutes of idle time.

Actual behavior

WebClient HTTP request fails after 5 minutes of idle time.
If a read timeout is set, I get a ReadTimeoutException.
If not, I get the following exception:

java.io.IOException: An existing connection was forcibly closed by the remote host
	at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_181]
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_181]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_181]
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1125) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) ~[netty-common-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.36.Final.jar:4.1.36.Final]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181]
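
For reference, the read timeout mentioned above is usually installed via Netty's ReadTimeoutHandler. A minimal sketch, assuming the Reactor Netty 0.8 API (the 10-second value is illustrative, not taken from the report):

import io.netty.handler.timeout.ReadTimeoutHandler;
import reactor.netty.tcp.TcpClient;

TcpClient tcpClient = TcpClient.create()
        // Illustrative 10-second read timeout: if no data arrives within it,
        // the request fails with ReadTimeoutException instead of a raw IOException.
        .doOnConnected(conn -> conn.addHandlerLast(new ReadTimeoutHandler(10)));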

Steps to reproduce

1. Start the application: https://github.com/martin-tarjanyi/webclient-bug/blob/master/src/main/java/com/example/webclientbug/WebclientBugApplication.java
2. Call http://localhost:8080/get - the request succeeds
3. Wait ~5 minutes
4. Call http://localhost:8080/get - the request fails

Reactor Netty version

0.8.9.RELEASE

JVM version (e.g. java -version)

1.8.0_181

OS version (e.g. uname -a)

Windows 10

Related issue

https://stackoverflow.com/questions/55621800/springboot-webclient-throws-an-existing-connection-was-forcibly-closed-by-the-r

@martin-tarjanyi martin-tarjanyi added the type/bug A general bug label Jun 22, 2019
@martin-tarjanyi martin-tarjanyi changed the title First WebClient request always fail after some idle time First WebClient request always fails after some idle time Jun 22, 2019
@violetagg (Member)

@martin-tarjanyi I cannot reproduce this.
Can you do the following:

Enable the wire log:

TcpClient tcpClient = TcpClient.create()
                               .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000)
                               .doOnConnected(doOnConnectedConsumer)
                               .wiretap(true); // <== enable wire logging

And in application.properties set logging.level.reactor.netty=debug

Then reproduce the issue and attach the log file here.
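
For context, such a TcpClient is typically wired into the WebClient like this (a sketch assuming the Spring WebFlux 5.1 API; tcpClient is the instance configured above):

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

HttpClient httpClient = HttpClient.from(tcpClient); // reuse the TcpClient configuration
WebClient webClient = WebClient.builder()
                               .clientConnector(new ReactorClientHttpConnector(httpClient))
                               .build();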

@violetagg violetagg added the for/user-attention This issue needs user attention (feedback, rework, etc...) label Jun 25, 2019
@martin-tarjanyi (Author)

First request: 21:11 - success
Second request: 21:16 - success
Third request: 21:21 - failure

"C:\Program Files\Java\jdk1.8.0_181\bin\java.exe" -XX:TieredStopAtLevel=1 -noverify -Dspring.output.ansi.enabled=always -Dcom.sun.management.jmxremote -Dspring.jmx.enabled=true -Dspring.liveBeansView.mbeanDomain -Dspring.application.admin.enabled=true "-javaagent:C:\Program Files\JetBrains\IntelliJ IDEA 2018.3\lib\idea_rt.jar=3091:C:\Program Files\JetBrains\IntelliJ IDEA 2018.3\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_181\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_181\jre\lib\rt.jar;D:\dev\webclient-bug\target\classes;C:\Users\X\.m2\repository\org\springframework\boot\spring-boot-starter-webflux\2.1.6.RELEASE\spring-boot-starter-webflux-2.1.6.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\boot\spring-boot-starter\2.1.6.RELEASE\spring-boot-starter-2.1.6.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\boot\spring-boot\2.1.6.RELEASE\spring-boot-2.1.6.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\spring-context\5.1.8.RELEASE\spring-context-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\spring-aop\5.1.8.RELEASE\spring-aop-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\spring-expression\5.1.8.RELEASE\spring-expression-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\boot\spring-boot-autoconfigure\2.1.6.RELEASE\spring-boot-autoconfigure-2.1.6.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\boot\spring-boot-starter-logging\2.1.6.RELEASE\spring-boot-starter-logging-2.1.6.RELEASE.jar;C:\Users\X\.m2\repository\ch\qos\logback\logback-classic\1.2.3\logback-classic-1.2.3.jar;C:\Users\X\.m2\repository\ch\qos\logback\logback-core\1.2.3\logback-core-1.2.3.jar;C:\Users\X\.m2\repository\org\apache\logging\log4j\log4j-to-slf4j\2.11.2\log4j-to-slf4j-2.11.2.jar;C:\Users\X\.m2\repository\org\apache\logging\log4j\log4j-api\2.11.2\log4j-api-2.11.2.jar;C:\Users\X\.m2\repository\org\slf4j\jul-to-slf4j\1.7.26\jul-to-slf4j-1.7.26.jar;C:\Users\X\.m2\repository\javax\annotation\javax.annotation-api\1.3.2\javax.annotation-api-1.3.2.jar;C:\Users\X\.m2\repository\org\yaml\snakeyaml\1.23\snakeyaml-1.23.jar;C:\Users\X\.m2\repository\org\springframework\boot\spring-boot-starter-json\2.1.6.RELEASE\spring-boot-starter-json-2.1.6.RELEASE.jar;C:\Users\X\.m2\repository\com\fasterxml\jackson\core\jackson-databind
\2.9.9\jackson-databind-2.9.9.jar;C:\Users\X\.m2\repository\com\fasterxml\jackson\core\jackson-annotations\2.9.0\jackson-annotations-2.9.0.jar;C:\Users\X\.m2\repository\com\fasterxml\jackson\core\jackson-core\2.9.9\jackson-core-2.9.9.jar;C:\Users\X\.m2\repository\com\fasterxml\jackson\datatype\jackson-datatype-jdk8\2.9.9\jackson-datatype-jdk8-2.9.9.jar;C:\Users\X\.m2\repository\com\fasterxml\jackson\datatype\jackson-datatype-jsr310\2.9.9\jackson-datatype-jsr310-2.9.9.jar;C:\Users\X\.m2\repository\com\fasterxml\jackson\module\jackson-module-parameter-names\2.9.9\jackson-module-parameter-names-2.9.9.jar;C:\Users\X\.m2\repository\org\springframework\boot\spring-boot-starter-reactor-netty\2.1.6.RELEASE\spring-boot-starter-reactor-netty-2.1.6.RELEASE.jar;C:\Users\X\.m2\repository\io\projectreactor\netty\reactor-netty\0.8.9.RELEASE\reactor-netty-0.8.9.RELEASE.jar;C:\Users\X\.m2\repository\io\netty\netty-codec-http\4.1.36.Final\netty-codec-http-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-common\4.1.36.Final\netty-common-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-buffer\4.1.36.Final\netty-buffer-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-transport\4.1.36.Final\netty-transport-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-resolver\4.1.36.Final\netty-resolver-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-codec\4.1.36.Final\netty-codec-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-codec-http2\4.1.36.Final\netty-codec-http2-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-handler\4.1.36.Final\netty-handler-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-handler-proxy\4.1.36.Final\netty-handler-proxy-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-codec-socks\4.1.36.Final\netty-codec-socks-4.1.36.Final.jar;C:\Users\X\.m2\repository\io\netty\netty-transport-native-epoll\4.1.36.Final\netty-transport-native-epoll-4.1.36.Final-linux-x86_64.jar;C:\Users\X\.m2\repository\io\netty\netty-transport-native-unix-common\4.1.36.Final\netty-transport-native-unix-common-4.1.36.Final.jar;C:\Users\X\.m2\repository\org\glassfish\javax.el\3.0.0\javax.el-3.0.0.jar;C:\Users\X\.m2\repository\org\hibernate\validator\hibernate-validator\6.0.17.Final\hibernate-validator-6.0.17.Final.jar;C:\Users\X\.m2\repository\javax\validation\validation-api\2.0.1.Final\validation-api-2.0.1.Final.jar;C:\Users\X\.m2\repository\org\jboss\logging\jboss-logging\3.3.2.Final\jboss-logging-3.3.2.Final.jar;C:\Users\X\.m2\repository\com\fasterxml\classmate\1.4.0\classmate-1.4.0.jar;C:\Users\X\.m2\repository\org\springframework\spring-web\5.1.8.RELEASE\spring-web-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\spring-beans\5.1.8.RELEASE\spring-beans-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\spring-webflux\5.1.8.RELEASE\spring-webflux-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\org\synchronoss\cloud\nio-multipart-parser\1.1.0\nio-multipart-parser-1.1.0.jar;C:\Users\X\.m2\repository\org\slf4j\slf4j-api\1.7.26\slf4j-api-1.7.26.jar;C:\Users\X\.m2\repository\org\synchronoss\cloud\nio-stream-storage\1.1.3\nio-stream-storage-1.1.3.jar;C:\Users\X\.m2\repository\org\springframework\spring-core\5.1.8.RELEASE\spring-core-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\org\springframework\spring-jcl\5.1.8.RELEASE\spring-jcl-5.1.8.RELEASE.jar;C:\Users\X\.m2\repository\io\projectreactor\reactor-core\3.2.10.RELEASE\reactor-core-3.2.10.RELEASE.jar;C:\Users\X\.m2\repository\org\reactivestreams\reactive-streams\1.0.2\reactive-str
eams-1.0.2.jar" com.example.webclientbug.WebclientBugApplication

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.6.RELEASE)

2019-06-25 21:10:45.432  INFO 1520 --- [           main] c.e.w.WebclientBugApplication            : Starting WebclientBugApplication on DESKTOP-R3SMBEM with PID 1520 (D:\dev\webclient-bug\target\classes started by X in D:\dev\webclient-bug)
2019-06-25 21:10:45.436  INFO 1520 --- [           main] c.e.w.WebclientBugApplication            : No active profile set, falling back to default profiles: default
2019-06-25 21:10:46.300 DEBUG 1520 --- [           main] reactor.netty.tcp.TcpResources           : [http] resources will use the default LoopResources: DefaultLoopResources {prefix=reactor-http, daemon=true, selectCount=4, workerCount=4}
2019-06-25 21:10:46.303 DEBUG 1520 --- [           main] reactor.netty.tcp.TcpResources           : [http] resources will use the default ConnectionProvider: PooledConnectionProvider {name=http, poolFactory=reactor.netty.resources.ConnectionProvider$$Lambda$211/709133385@2dc995f4}
2019-06-25 21:10:46.639 DEBUG 1520 --- [           main] reactor.netty.tcp.TcpResources           : [tcp] resources will use the default LoopResources: DefaultLoopResources {prefix=reactor-tcp, daemon=true, selectCount=4, workerCount=4}
2019-06-25 21:10:46.641 DEBUG 1520 --- [           main] reactor.netty.tcp.TcpResources           : [tcp] resources will use the default ConnectionProvider: PooledConnectionProvider {name=tcp, poolFactory=reactor.netty.resources.ConnectionProvider$$Lambda$211/709133385@2dc995f4}
2019-06-25 21:10:47.128 DEBUG 1520 --- [           main] r.netty.resources.DefaultLoopEpoll       : Default Epoll support : false
2019-06-25 21:10:47.128 DEBUG 1520 --- [           main] r.netty.resources.DefaultLoopKQueue      : Default KQueue support : false
2019-06-25 21:10:47.380 DEBUG 1520 --- [ctor-http-nio-1] reactor.netty.tcp.TcpServer              : [id: 0x227e7941, L:/0:0:0:0:0:0:0:0:8080] Bound new server
2019-06-25 21:10:47.382  INFO 1520 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 8080
2019-06-25 21:10:47.385  INFO 1520 --- [           main] c.e.w.WebclientBugApplication            : Started WebclientBugApplication in 2.512 seconds (JVM running for 3.644)
2019-06-25 21:11:08.506 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] New http connection, requesting read
2019-06-25 21:11:08.506 DEBUG 1520 --- [ctor-http-nio-2] r.n.http.server.HttpServerOperations     : [id: 0x7e4ac947, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3109] New http connection, requesting read
2019-06-25 21:11:08.507 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.channel.BootstrapHandlers  : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:11:08.508 DEBUG 1520 --- [ctor-http-nio-2] reactor.netty.channel.BootstrapHandlers  : [id: 0x7e4ac947, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3109] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:11:08.535 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Increasing pending responses, now 1
2019-06-25 21:11:08.543 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.http.server.HttpServer     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter@78143be2
2019-06-25 21:11:09.045 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : Creating new client pool [tcp] for jsonplaceholder.typicode.com:443
2019-06-25 21:11:09.046 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3] Created new pooled channel, now 0 active connections and 1 inactive connections
2019-06-25 21:11:09.064 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.SslProvider            : [id: 0x635cb3c3] SSL enabled using engine SSLEngineImpl and SNI jsonplaceholder.typicode.com:443
2019-06-25 21:11:09.067 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.channel.BootstrapHandlers  : [id: 0x635cb3c3] Initialized pipeline DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.sslReader = reactor.netty.tcp.SslProvider$SslReadHandler), (BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (SimpleChannelPool$1#0 = io.netty.channel.pool.SimpleChannelPool$1), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:11:09.067 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3] REGISTERED
2019-06-25 21:11:09.120 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3] CONNECT: jsonplaceholder.typicode.com/172.64.128.28:443
2019-06-25 21:11:09.205 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] ACTIVE
2019-06-25 21:11:09.206 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Registering pool release on close event for channel
2019-06-25 21:11:09.206 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Channel connected, now 1 active connections and 0 inactive connections
2019-06-25 21:11:09.263 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ COMPLETE
2019-06-25 21:11:09.436 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ COMPLETE
2019-06-25 21:11:09.473 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] USER_EVENT: SslHandshakeCompletionEvent(SUCCESS)
2019-06-25 21:11:09.474 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}, [connected])
2019-06-25 21:11:09.476 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [configured])
2019-06-25 21:11:09.484 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.ReactorNetty               : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Added decoder [WriteTimeoutHandler] at the end of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.loggingHandler, reactor.left.httpCodec, WriteTimeoutHandler, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
2019-06-25 21:11:09.484 DEBUG 1520 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect    : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Handler is being applied: {uri=https://jsonplaceholder.typicode.com/todos/1/?param=value, method=GET}
2019-06-25 21:11:09.502 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] WRITE: 127B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 47 45 54 20 2f 74 6f 64 6f 73 2f 31 2f 3f 70 61 |GET /todos/1/?pa|
|00000010| 72 61 6d 3d 76 61 6c 75 65 20 48 54 54 50 2f 31 |ram=value HTTP/1|
|00000020| 2e 31 0d 0a 75 73 65 72 2d 61 67 65 6e 74 3a 20 |.1..user-agent: |
|00000030| 52 65 61 63 74 6f 72 4e 65 74 74 79 2f 30 2e 38 |ReactorNetty/0.8|
|00000040| 2e 39 2e 52 45 4c 45 41 53 45 0d 0a 68 6f 73 74 |.9.RELEASE..host|
|00000050| 3a 20 6a 73 6f 6e 70 6c 61 63 65 68 6f 6c 64 65 |: jsonplaceholde|
|00000060| 72 2e 74 79 70 69 63 6f 64 65 2e 63 6f 6d 0d 0a |r.typicode.com..|
|00000070| 61 63 63 65 70 74 3a 20 2a 2f 2a 0d 0a 0d 0a    |accept: */*.... |
+--------+-------------------------------------------------+----------------+
2019-06-25 21:11:09.504 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] FLUSH
2019-06-25 21:11:09.505 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] WRITE: 0B
2019-06-25 21:11:09.505 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] FLUSH
2019-06-25 21:11:09.505 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/todos/1/?param=value, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [request_sent])
2019-06-25 21:11:09.506 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ COMPLETE
2019-06-25 21:11:09.958 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ: 849B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 48 54 54 50 2f 31 2e 31 20 32 30 30 20 4f 4b 0d |HTTP/1.1 200 OK.|
|00000010| 0a 44 61 74 65 3a 20 54 75 65 2c 20 32 35 20 4a |.Date: Tue, 25 J|
|00000020| 75 6e 20 32 30 31 39 20 31 39 3a 31 31 3a 30 39 |un 2019 19:11:09|
|00000030| 20 47 4d 54 0d 0a 43 6f 6e 74 65 6e 74 2d 54 79 | GMT..Content-Ty|
|00000040| 70 65 3a 20 61 70 70 6c 69 63 61 74 69 6f 6e 2f |pe: application/|
|00000050| 6a 73 6f 6e 3b 20 63 68 61 72 73 65 74 3d 75 74 |json; charset=ut|
|00000060| 66 2d 38 0d 0a 43 6f 6e 74 65 6e 74 2d 4c 65 6e |f-8..Content-Len|
|00000070| 67 74 68 3a 20 38 33 0d 0a 43 6f 6e 6e 65 63 74 |gth: 83..Connect|
|00000080| 69 6f 6e 3a 20 6b 65 65 70 2d 61 6c 69 76 65 0d |ion: keep-alive.|
|00000090| 0a 53 65 74 2d 43 6f 6f 6b 69 65 3a 20 5f 5f 63 |.Set-Cookie: __c|
|000000a0| 66 64 75 69 64 3d 64 31 32 65 39 39 65 36 32 64 |fduid=d12e99e62d|
|000000b0| 64 30 38 35 61 30 34 61 35 64 62 64 30 36 66 61 |d085a04a5dbd06fa|
|000000c0| 32 63 64 39 33 33 61 31 35 36 31 34 38 39 38 36 |2cd933a156148986|
|000000d0| 39 3b 20 65 78 70 69 72 65 73 3d 57 65 64 2c 20 |9; expires=Wed, |
|000000e0| 32 34 2d 4a 75 6e 2d 32 30 20 31 39 3a 31 31 3a |24-Jun-20 19:11:|
|000000f0| 30 39 20 47 4d 54 3b 20 70 61 74 68 3d 2f 3b 20 |09 GMT; path=/; |
|00000100| 64 6f 6d 61 69 6e 3d 2e 74 79 70 69 63 6f 64 65 |domain=.typicode|
|00000110| 2e 63 6f 6d 3b 20 48 74 74 70 4f 6e 6c 79 0d 0a |.com; HttpOnly..|
|00000120| 58 2d 50 6f 77 65 72 65 64 2d 42 79 3a 20 45 78 |X-Powered-By: Ex|
|00000130| 70 72 65 73 73 0d 0a 56 61 72 79 3a 20 4f 72 69 |press..Vary: Ori|
|00000140| 67 69 6e 2c 20 41 63 63 65 70 74 2d 45 6e 63 6f |gin, Accept-Enco|
|00000150| 64 69 6e 67 0d 0a 41 63 63 65 73 73 2d 43 6f 6e |ding..Access-Con|
|00000160| 74 72 6f 6c 2d 41 6c 6c 6f 77 2d 43 72 65 64 65 |trol-Allow-Crede|
|00000170| 6e 74 69 61 6c 73 3a 20 74 72 75 65 0d 0a 43 61 |ntials: true..Ca|
|00000180| 63 68 65 2d 43 6f 6e 74 72 6f 6c 3a 20 70 75 62 |che-Control: pub|
|00000190| 6c 69 63 2c 20 6d 61 78 2d 61 67 65 3d 31 34 34 |lic, max-age=144|
|000001a0| 30 30 0d 0a 50 72 61 67 6d 61 3a 20 6e 6f 2d 63 |00..Pragma: no-c|
|000001b0| 61 63 68 65 0d 0a 45 78 70 69 72 65 73 3a 20 54 |ache..Expires: T|
|000001c0| 75 65 2c 20 32 35 20 4a 75 6e 20 32 30 31 39 20 |ue, 25 Jun 2019 |
|000001d0| 32 33 3a 31 31 3a 30 39 20 47 4d 54 0d 0a 58 2d |23:11:09 GMT..X-|
|000001e0| 43 6f 6e 74 65 6e 74 2d 54 79 70 65 2d 4f 70 74 |Content-Type-Opt|
|000001f0| 69 6f 6e 73 3a 20 6e 6f 73 6e 69 66 66 0d 0a 45 |ions: nosniff..E|
|00000200| 74 61 67 3a 20 57 2f 22 35 33 2d 68 66 45 6e 75 |tag: W/"53-hfEnu|
|00000210| 6d 65 4e 68 36 59 69 72 66 6a 79 6a 61 75 6a 63 |meNh6Yirfjyjaujc|
|00000220| 4f 50 50 54 2b 73 22 0d 0a 56 69 61 3a 20 31 2e |OPPT+s"..Via: 1.|
|00000230| 31 20 76 65 67 75 72 0d 0a 43 46 2d 43 61 63 68 |1 vegur..CF-Cach|
|00000240| 65 2d 53 74 61 74 75 73 3a 20 4d 49 53 53 0d 0a |e-Status: MISS..|
|00000250| 41 63 63 65 70 74 2d 52 61 6e 67 65 73 3a 20 62 |Accept-Ranges: b|
|00000260| 79 74 65 73 0d 0a 45 78 70 65 63 74 2d 43 54 3a |ytes..Expect-CT:|
|00000270| 20 6d 61 78 2d 61 67 65 3d 36 30 34 38 30 30 2c | max-age=604800,|
|00000280| 20 72 65 70 6f 72 74 2d 75 72 69 3d 22 68 74 74 | report-uri="htt|
|00000290| 70 73 3a 2f 2f 72 65 70 6f 72 74 2d 75 72 69 2e |ps://report-uri.|
|000002a0| 63 6c 6f 75 64 66 6c 61 72 65 2e 63 6f 6d 2f 63 |cloudflare.com/c|
|000002b0| 64 6e 2d 63 67 69 2f 62 65 61 63 6f 6e 2f 65 78 |dn-cgi/beacon/ex|
|000002c0| 70 65 63 74 2d 63 74 22 0d 0a 53 65 72 76 65 72 |pect-ct"..Server|
|000002d0| 3a 20 63 6c 6f 75 64 66 6c 61 72 65 0d 0a 43 46 |: cloudflare..CF|
|000002e0| 2d 52 41 59 3a 20 34 65 63 39 33 65 65 34 30 38 |-RAY: 4ec93ee408|
|000002f0| 36 35 63 32 65 66 2d 46 52 41 0d 0a 0d 0a 7b 0a |65c2ef-FRA....{.|
|00000300| 20 20 22 75 73 65 72 49 64 22 3a 20 31 2c 0a 20 |  "userId": 1,. |
|00000310| 20 22 69 64 22 3a 20 31 2c 0a 20 20 22 74 69 74 | "id": 1,.  "tit|
|00000320| 6c 65 22 3a 20 22 64 65 6c 65 63 74 75 73 20 61 |le": "delectus a|
|00000330| 75 74 20 61 75 74 65 6d 22 2c 0a 20 20 22 63 6f |ut autem",.  "co|
|00000340| 6d 70 6c 65 74 65 64 22 3a 20 66 61 6c 73 65 0a |mpleted": false.|
|00000350| 7d                                              |}               |
+--------+-------------------------------------------------+----------------+
2019-06-25 21:11:09.962 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.client.HttpClientOperations     : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Received response (auto-read:false) : [Date=Tue, 25 Jun 2019 19:11:09 GMT, Content-Type=application/json; charset=utf-8, Content-Length=83, Connection=keep-alive, Set-Cookie=__cfduid=d12e99e62dd085a04a5dbd06fa2cd933a1561489869; expires=Wed, 24-Jun-20 19:11:09 GMT; path=/; domain=.typicode.com; HttpOnly, X-Powered-By=Express, Vary=Origin, Accept-Encoding, Access-Control-Allow-Credentials=true, Cache-Control=public, max-age=14400, Pragma=no-cache, Expires=Tue, 25 Jun 2019 23:11:09 GMT, X-Content-Type-Options=nosniff, Etag=W/"53-hfEnumeNh6YirfjyjaujcOPPT+s", Via=1.1 vegur, CF-Cache-Status=MISS, Accept-Ranges=bytes, Expect-CT=max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct", Server=cloudflare, CF-RAY=4ec93ee40865c2ef-FRA]
2019-06-25 21:11:09.962 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/todos/1/?param=value, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [response_received])
2019-06-25 21:11:09.989 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.channel.FluxReceive        : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Subscribing inbound receiver [pending: 0, cancelled:false, inboundDone: false]
2019-06-25 21:11:09.994 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.client.HttpClientOperations     : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Received last HTTP packet
2019-06-25 21:11:10.003 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Decreasing pending responses, now 0
2019-06-25 21:11:10.004 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Last HTTP packet was sent, terminating the channel
2019-06-25 21:11:10.004 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.ReactorNetty               : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Removed handler: WriteTimeoutHandler, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:11:10.004 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/todos/1/?param=value, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [disconnecting])
2019-06-25 21:11:10.004 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Releasing channel
2019-06-25 21:11:10.005 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Channel cleaned, now 0 active connections and 1 inactive connections
2019-06-25 21:11:10.005 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ COMPLETE
2019-06-25 21:11:10.400 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Increasing pending responses, now 1
2019-06-25 21:11:10.401 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.http.server.HttpServer     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter@78143be2
2019-06-25 21:11:10.443 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Decreasing pending responses, now 0
2019-06-25 21:11:10.444 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] Last HTTP packet was sent, terminating the channel
2019-06-25 21:11:10.444 DEBUG 1520 --- [ctor-http-nio-3] r.n.channel.ChannelOperationsHandler     : [id: 0x647e0b5a, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3110] No ChannelOperation attached. Dropping: EmptyLastHttpContent
2019-06-25 21:16:16.330 DEBUG 1520 --- [ctor-http-nio-1] r.n.http.server.HttpServerOperations     : [id: 0x87bf3a9f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3214] New http connection, requesting read
2019-06-25 21:16:16.330 DEBUG 1520 --- [ctor-http-nio-4] r.n.http.server.HttpServerOperations     : [id: 0x1817c20f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3212] New http connection, requesting read
2019-06-25 21:16:16.330 DEBUG 1520 --- [ctor-http-nio-1] reactor.netty.channel.BootstrapHandlers  : [id: 0x87bf3a9f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3214] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:16:16.330 DEBUG 1520 --- [ctor-http-nio-4] reactor.netty.channel.BootstrapHandlers  : [id: 0x1817c20f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3212] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:16:16.335 DEBUG 1520 --- [ctor-http-nio-4] r.n.http.server.HttpServerOperations     : [id: 0x1817c20f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3212] Increasing pending responses, now 1
2019-06-25 21:16:16.336 DEBUG 1520 --- [ctor-http-nio-4] reactor.netty.http.server.HttpServer     : [id: 0x1817c20f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3212] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter@78143be2
2019-06-25 21:16:16.340 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Channel acquired, now 1 active connections and 0 inactive connections
2019-06-25 21:16:16.341 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.ReactorNetty               : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Added decoder [WriteTimeoutHandler] at the end of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.loggingHandler, reactor.left.httpCodec, WriteTimeoutHandler, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
2019-06-25 21:16:16.341 DEBUG 1520 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect    : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Handler is being applied: {uri=https://jsonplaceholder.typicode.com/todos/1/?param=value, method=GET}
2019-06-25 21:16:16.341 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] WRITE: 127B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 47 45 54 20 2f 74 6f 64 6f 73 2f 31 2f 3f 70 61 |GET /todos/1/?pa|
|00000010| 72 61 6d 3d 76 61 6c 75 65 20 48 54 54 50 2f 31 |ram=value HTTP/1|
|00000020| 2e 31 0d 0a 75 73 65 72 2d 61 67 65 6e 74 3a 20 |.1..user-agent: |
|00000030| 52 65 61 63 74 6f 72 4e 65 74 74 79 2f 30 2e 38 |ReactorNetty/0.8|
|00000040| 2e 39 2e 52 45 4c 45 41 53 45 0d 0a 68 6f 73 74 |.9.RELEASE..host|
|00000050| 3a 20 6a 73 6f 6e 70 6c 61 63 65 68 6f 6c 64 65 |: jsonplaceholde|
|00000060| 72 2e 74 79 70 69 63 6f 64 65 2e 63 6f 6d 0d 0a |r.typicode.com..|
|00000070| 61 63 63 65 70 74 3a 20 2a 2f 2a 0d 0a 0d 0a    |accept: */*.... |
+--------+-------------------------------------------------+----------------+
2019-06-25 21:16:16.341 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] FLUSH
2019-06-25 21:16:16.342 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] WRITE: 0B
2019-06-25 21:16:16.342 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] FLUSH
2019-06-25 21:16:16.342 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/todos/1/?param=value, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [request_sent])
2019-06-25 21:16:16.391 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ: 848B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 48 54 54 50 2f 31 2e 31 20 32 30 30 20 4f 4b 0d |HTTP/1.1 200 OK.|
|00000010| 0a 44 61 74 65 3a 20 54 75 65 2c 20 32 35 20 4a |.Date: Tue, 25 J|
|00000020| 75 6e 20 32 30 31 39 20 31 39 3a 31 36 3a 31 36 |un 2019 19:16:16|
|00000030| 20 47 4d 54 0d 0a 43 6f 6e 74 65 6e 74 2d 54 79 | GMT..Content-Ty|
|00000040| 70 65 3a 20 61 70 70 6c 69 63 61 74 69 6f 6e 2f |pe: application/|
|00000050| 6a 73 6f 6e 3b 20 63 68 61 72 73 65 74 3d 75 74 |json; charset=ut|
|00000060| 66 2d 38 0d 0a 43 6f 6e 74 65 6e 74 2d 4c 65 6e |f-8..Content-Len|
|00000070| 67 74 68 3a 20 38 33 0d 0a 43 6f 6e 6e 65 63 74 |gth: 83..Connect|
|00000080| 69 6f 6e 3a 20 6b 65 65 70 2d 61 6c 69 76 65 0d |ion: keep-alive.|
|00000090| 0a 53 65 74 2d 43 6f 6f 6b 69 65 3a 20 5f 5f 63 |.Set-Cookie: __c|
|000000a0| 66 64 75 69 64 3d 64 38 37 65 63 61 33 64 35 34 |fduid=d87eca3d54|
|000000b0| 32 61 35 32 62 61 61 62 37 39 39 32 38 30 31 31 |2a52baab79928011|
|000000c0| 64 63 32 32 32 36 34 31 35 36 31 34 39 30 31 37 |dc22264156149017|
|000000d0| 36 3b 20 65 78 70 69 72 65 73 3d 57 65 64 2c 20 |6; expires=Wed, |
|000000e0| 32 34 2d 4a 75 6e 2d 32 30 20 31 39 3a 31 36 3a |24-Jun-20 19:16:|
|000000f0| 31 36 20 47 4d 54 3b 20 70 61 74 68 3d 2f 3b 20 |16 GMT; path=/; |
|00000100| 64 6f 6d 61 69 6e 3d 2e 74 79 70 69 63 6f 64 65 |domain=.typicode|
|00000110| 2e 63 6f 6d 3b 20 48 74 74 70 4f 6e 6c 79 0d 0a |.com; HttpOnly..|
|00000120| 58 2d 50 6f 77 65 72 65 64 2d 42 79 3a 20 45 78 |X-Powered-By: Ex|
|00000130| 70 72 65 73 73 0d 0a 56 61 72 79 3a 20 4f 72 69 |press..Vary: Ori|
|00000140| 67 69 6e 2c 20 41 63 63 65 70 74 2d 45 6e 63 6f |gin, Accept-Enco|
|00000150| 64 69 6e 67 0d 0a 41 63 63 65 73 73 2d 43 6f 6e |ding..Access-Con|
|00000160| 74 72 6f 6c 2d 41 6c 6c 6f 77 2d 43 72 65 64 65 |trol-Allow-Crede|
|00000170| 6e 74 69 61 6c 73 3a 20 74 72 75 65 0d 0a 43 61 |ntials: true..Ca|
|00000180| 63 68 65 2d 43 6f 6e 74 72 6f 6c 3a 20 70 75 62 |che-Control: pub|
|00000190| 6c 69 63 2c 20 6d 61 78 2d 61 67 65 3d 31 34 34 |lic, max-age=144|
|000001a0| 30 30 0d 0a 50 72 61 67 6d 61 3a 20 6e 6f 2d 63 |00..Pragma: no-c|
|000001b0| 61 63 68 65 0d 0a 45 78 70 69 72 65 73 3a 20 54 |ache..Expires: T|
|000001c0| 75 65 2c 20 32 35 20 4a 75 6e 20 32 30 31 39 20 |ue, 25 Jun 2019 |
|000001d0| 32 33 3a 31 36 3a 31 36 20 47 4d 54 0d 0a 58 2d |23:16:16 GMT..X-|
|000001e0| 43 6f 6e 74 65 6e 74 2d 54 79 70 65 2d 4f 70 74 |Content-Type-Opt|
|000001f0| 69 6f 6e 73 3a 20 6e 6f 73 6e 69 66 66 0d 0a 45 |ions: nosniff..E|
|00000200| 74 61 67 3a 20 57 2f 22 35 33 2d 68 66 45 6e 75 |tag: W/"53-hfEnu|
|00000210| 6d 65 4e 68 36 59 69 72 66 6a 79 6a 61 75 6a 63 |meNh6Yirfjyjaujc|
|00000220| 4f 50 50 54 2b 73 22 0d 0a 56 69 61 3a 20 31 2e |OPPT+s"..Via: 1.|
|00000230| 31 20 76 65 67 75 72 0d 0a 43 46 2d 43 61 63 68 |1 vegur..CF-Cach|
|00000240| 65 2d 53 74 61 74 75 73 3a 20 48 49 54 0d 0a 41 |e-Status: HIT..A|
|00000250| 63 63 65 70 74 2d 52 61 6e 67 65 73 3a 20 62 79 |ccept-Ranges: by|
|00000260| 74 65 73 0d 0a 45 78 70 65 63 74 2d 43 54 3a 20 |tes..Expect-CT: |
|00000270| 6d 61 78 2d 61 67 65 3d 36 30 34 38 30 30 2c 20 |max-age=604800, |
|00000280| 72 65 70 6f 72 74 2d 75 72 69 3d 22 68 74 74 70 |report-uri="http|
|00000290| 73 3a 2f 2f 72 65 70 6f 72 74 2d 75 72 69 2e 63 |s://report-uri.c|
|000002a0| 6c 6f 75 64 66 6c 61 72 65 2e 63 6f 6d 2f 63 64 |loudflare.com/cd|
|000002b0| 6e 2d 63 67 69 2f 62 65 61 63 6f 6e 2f 65 78 70 |n-cgi/beacon/exp|
|000002c0| 65 63 74 2d 63 74 22 0d 0a 53 65 72 76 65 72 3a |ect-ct"..Server:|
|000002d0| 20 63 6c 6f 75 64 66 6c 61 72 65 0d 0a 43 46 2d | cloudflare..CF-|
|000002e0| 52 41 59 3a 20 34 65 63 39 34 36 36 31 63 63 31 |RAY: 4ec94661cc1|
|000002f0| 66 63 32 65 66 2d 46 52 41 0d 0a 0d 0a 7b 0a 20 |fc2ef-FRA....{. |
|00000300| 20 22 75 73 65 72 49 64 22 3a 20 31 2c 0a 20 20 | "userId": 1,.  |
|00000310| 22 69 64 22 3a 20 31 2c 0a 20 20 22 74 69 74 6c |"id": 1,.  "titl|
|00000320| 65 22 3a 20 22 64 65 6c 65 63 74 75 73 20 61 75 |e": "delectus au|
|00000330| 74 20 61 75 74 65 6d 22 2c 0a 20 20 22 63 6f 6d |t autem",.  "com|
|00000340| 70 6c 65 74 65 64 22 3a 20 66 61 6c 73 65 0a 7d |pleted": false.}|
+--------+-------------------------------------------------+----------------+
2019-06-25 21:16:16.391 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.client.HttpClientOperations     : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Received response (auto-read:false) : [Date=Tue, 25 Jun 2019 19:16:16 GMT, Content-Type=application/json; charset=utf-8, Content-Length=83, Connection=keep-alive, Set-Cookie=__cfduid=d87eca3d542a52baab79928011dc222641561490176; expires=Wed, 24-Jun-20 19:16:16 GMT; path=/; domain=.typicode.com; HttpOnly, X-Powered-By=Express, Vary=Origin, Accept-Encoding, Access-Control-Allow-Credentials=true, Cache-Control=public, max-age=14400, Pragma=no-cache, Expires=Tue, 25 Jun 2019 23:16:16 GMT, X-Content-Type-Options=nosniff, Etag=W/"53-hfEnumeNh6YirfjyjaujcOPPT+s", Via=1.1 vegur, CF-Cache-Status=HIT, Accept-Ranges=bytes, Expect-CT=max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct", Server=cloudflare, CF-RAY=4ec94661cc1fc2ef-FRA]
2019-06-25 21:16:16.391 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/todos/1/?param=value, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [response_received])
2019-06-25 21:16:16.392 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.channel.FluxReceive        : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Subscribing inbound receiver [pending: 0, cancelled:false, inboundDone: false]
2019-06-25 21:16:16.392 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.client.HttpClientOperations     : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Received last HTTP packet
2019-06-25 21:16:16.397 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.ReactorNetty               : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Removed handler: WriteTimeoutHandler, pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:16:16.397 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/todos/1/?param=value, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [disconnecting])
2019-06-25 21:16:16.397 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Releasing channel
2019-06-25 21:16:16.397 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Channel cleaned, now 0 active connections and 1 inactive connections
2019-06-25 21:16:16.397 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ COMPLETE
2019-06-25 21:16:16.399 DEBUG 1520 --- [ctor-http-nio-4] r.n.http.server.HttpServerOperations     : [id: 0x1817c20f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3212] Decreasing pending responses, now 0
2019-06-25 21:16:16.400 DEBUG 1520 --- [ctor-http-nio-4] r.n.http.server.HttpServerOperations     : [id: 0x1817c20f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3212] Last HTTP response frame
2019-06-25 21:16:16.401 DEBUG 1520 --- [ctor-http-nio-4] r.n.http.server.HttpServerOperations     : [id: 0x1817c20f, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3212] Last HTTP packet was sent, terminating the channel
2019-06-25 21:21:41.162 DEBUG 1520 --- [ctor-http-nio-2] r.n.http.server.HttpServerOperations     : [id: 0x903d5efc, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3435] New http connection, requesting read
2019-06-25 21:21:41.163 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x99be7b9c, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3436] New http connection, requesting read
2019-06-25 21:21:41.163 DEBUG 1520 --- [ctor-http-nio-2] reactor.netty.channel.BootstrapHandlers  : [id: 0x903d5efc, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3435] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:21:41.163 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.channel.BootstrapHandlers  : [id: 0x99be7b9c, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3436] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:21:41.171 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x99be7b9c, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3436] Increasing pending responses, now 1
2019-06-25 21:21:41.172 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.http.server.HttpServer     : [id: 0x99be7b9c, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3436] Handler is being applied: org.springframework.http.server.reactive.ReactorHttpHandlerAdapter@78143be2
2019-06-25 21:21:41.176 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Channel acquired, now 1 active connections and 0 inactive connections
2019-06-25 21:21:41.176 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.ReactorNetty               : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Added decoder [WriteTimeoutHandler] at the end of the user pipeline, full pipeline: [reactor.left.sslHandler, reactor.left.loggingHandler, reactor.left.httpCodec, WriteTimeoutHandler, reactor.right.reactiveBridge, DefaultChannelPipeline$TailContext#0]
2019-06-25 21:21:41.176 DEBUG 1520 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect    : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] Handler is being applied: {uri=https://jsonplaceholder.typicode.com/todos/1/?param=value, method=GET}
2019-06-25 21:21:41.177 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] WRITE: 127B
         +-------------------------------------------------+
         |  0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f |
+--------+-------------------------------------------------+----------------+
|00000000| 47 45 54 20 2f 74 6f 64 6f 73 2f 31 2f 3f 70 61 |GET /todos/1/?pa|
|00000010| 72 61 6d 3d 76 61 6c 75 65 20 48 54 54 50 2f 31 |ram=value HTTP/1|
|00000020| 2e 31 0d 0a 75 73 65 72 2d 61 67 65 6e 74 3a 20 |.1..user-agent: |
|00000030| 52 65 61 63 74 6f 72 4e 65 74 74 79 2f 30 2e 38 |ReactorNetty/0.8|
|00000040| 2e 39 2e 52 45 4c 45 41 53 45 0d 0a 68 6f 73 74 |.9.RELEASE..host|
|00000050| 3a 20 6a 73 6f 6e 70 6c 61 63 65 68 6f 6c 64 65 |: jsonplaceholde|
|00000060| 72 2e 74 79 70 69 63 6f 64 65 2e 63 6f 6d 0d 0a |r.typicode.com..|
|00000070| 61 63 63 65 70 74 3a 20 2a 2f 2a 0d 0a 0d 0a    |accept: */*.... |
+--------+-------------------------------------------------+----------------+
2019-06-25 21:21:41.177 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] FLUSH
2019-06-25 21:21:41.177 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] WRITE: 0B
2019-06-25 21:21:41.177 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] FLUSH
2019-06-25 21:21:41.178 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] onStateChange(GET{uri=/todos/1/?param=value, connection=PooledConnection{channel=[id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443]}}, [request_sent])
2019-06-25 21:22:02.738 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] READ COMPLETE
2019-06-25 21:22:02.746 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] EXCEPTION: java.io.IOException: An existing connection was forcibly closed by the remote host

java.io.IOException: An existing connection was forcibly closed by the remote host
	at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_181]
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_181]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_181]
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1125) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar:4.1.36.Final]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]

2019-06-25 21:22:02.750  WARN 1520 --- [ctor-http-nio-3] r.netty.http.client.HttpClientConnect    : [id: 0x635cb3c3, L:/192.168.0.3:3111 - R:jsonplaceholder.typicode.com/172.64.128.28:443] The connection observed an error

java.io.IOException: An existing connection was forcibly closed by the remote host
	at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_181]
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_181]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_181]
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1125) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-common-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.36.Final.jar:4.1.36.Final]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]

2019-06-25 21:22:02.767 ERROR 1520 --- [ctor-http-nio-3] a.w.r.e.AbstractErrorWebExceptionHandler : [99be7b9c] 500 Server Error for HTTP GET "/get"

java.io.IOException: An existing connection was forcibly closed by the remote host
	at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_181]
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_181]
	at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_181]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_181]
	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1125) ~[netty-buffer-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) ~[netty-common-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.36.Final.jar:4.1.36.Final]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_181]

2019-06-25 21:22:02.769 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x99be7b9c, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3436] Decreasing pending responses, now 0
2019-06-25 21:22:02.769 DEBUG 1520 --- [ctor-http-nio-3] r.n.http.server.HttpServerOperations     : [id: 0x99be7b9c, L:/0:0:0:0:0:0:0:1:8080 - R:/0:0:0:0:0:0:0:1:3436] Last HTTP packet was sent, terminating the channel
2019-06-25 21:22:02.770 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 ! R:jsonplaceholder.typicode.com/172.64.128.28:443] Channel cleaned, now 0 active connections and 1 inactive connections
2019-06-25 21:22:02.770 DEBUG 1520 --- [ctor-http-nio-3] r.n.resources.PooledConnectionProvider   : [id: 0x635cb3c3, L:/192.168.0.3:3111 ! R:jsonplaceholder.typicode.com/172.64.128.28:443] Channel closed, now 0 active connections and 0 inactive connections
2019-06-25 21:22:02.770 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.ReactorNetty               : [id: 0x635cb3c3, L:/192.168.0.3:3111 ! R:jsonplaceholder.typicode.com/172.64.128.28:443] Non Removed handler: WriteTimeoutHandler, context: ChannelHandlerContext(WriteTimeoutHandler, [id: 0x635cb3c3, L:/192.168.0.3:3111 ! R:jsonplaceholder.typicode.com/172.64.128.28:443]), pipeline: DefaultChannelPipeline{(reactor.left.sslHandler = io.netty.handler.ssl.SslHandler), (reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (WriteTimeoutHandler = io.netty.handler.timeout.WriteTimeoutHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-06-25 21:22:02.775 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 ! R:jsonplaceholder.typicode.com/172.64.128.28:443] USER_EVENT: SslCloseCompletionEvent(java.nio.channels.ClosedChannelException)
2019-06-25 21:22:02.775 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 ! R:jsonplaceholder.typicode.com/172.64.128.28:443] INACTIVE
2019-06-25 21:22:02.775 DEBUG 1520 --- [ctor-http-nio-3] reactor.netty.tcp.TcpClient              : [id: 0x635cb3c3, L:/192.168.0.3:3111 ! R:jsonplaceholder.typicode.com/172.64.128.28:443] UNREGISTERED

@violetagg
Member

@martin-tarjanyi it seems the server closes the connection. Can you capture a TCP dump? Then we will be able to see clearly what's happening.

@violetagg
Member

violetagg commented Jul 2, 2019

@martin-tarjanyi Were you able to obtain a tcp dump in order to check whether the server closes the connection?

@Kirupakaran

I'm facing the exact same issue. I tried reactor-netty v0.7.9 and v0.8.5 (Spring WebFlux); it happens only on Windows, not on Linux. I've checked the logs and found that this happens when an old TCP connection that was idle for some time is reused. I'll upload the logs as soon as I can.

@violetagg
Member

violetagg commented Jul 8, 2019

@Kirupakaran Try the latest Reactor Netty release (for 0.8.x it is 0.8.9.RELEASE).
Also obtain a TCP dump in order to check what's going on.

@violetagg violetagg removed the type/bug A general bug label Jul 8, 2019
@violetagg violetagg removed the for/user-attention This issue needs user attention (feedback, rework, etc...) label Jul 17, 2019
@violetagg
Member

Closing the issue as there is no response from the author.
If there is more information (e.g. a TCP dump), we can reopen it.

@Kirupakaran

@violetagg I'm sorry, but my company prevented me from providing the TCP dump since it involves a third-party service. If I can replicate it, I'll open a new issue with a TCP dump.

@violetagg violetagg added the for/user-attention This issue needs user attention (feedback, rework, etc...) label Jul 26, 2019
@hdineth

hdineth commented Mar 18, 2021

@violetagg I got the same "An existing connection was forcibly closed by the remote host" warning after the first request completes and the application has been idle for 5 minutes.

My full error is below:

2021-03-18 11:27:22.458 - - WARN [-,,,] 15296 --- [ctor-http-nio-6] i.n.c.AbstractChannelHandlerContext : An exception 'java.lang.NullPointerException' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:

java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:247)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1147)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)

I have configured my WebClient as below:

@Bean
public WebClient webClient(WebClient.Builder webClientBuilder) {

    ConnectionProvider provider = ConnectionProvider
        .fixed("customFixedConnectionPool", maxConnections, acquireTimeout);

    HttpClient httpClient = HttpClient.create(provider).metrics(true);

    return webClientBuilder
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
        .build();
}

But when I remove the .metrics(true) from the HTTP client, that warning goes away.

@violetagg
Member

@hdineth Please open a new issue and, if possible, switch to DEBUG level:
'java.lang.NullPointerException' [enable DEBUG level for full stacktrace]

@hdineth

hdineth commented Mar 18, 2021

Hi @violetagg, I'm not sure whether this has been fixed in the latest release. My current version is 0.9.0.RELEASE. Find the DEBUG logs below.

2021-03-18 13:56:46.332 -  - DEBUG [-,,,] 2604 --- [ctor-http-nio-6] i.n.c.AbstractChannelHandlerContext      : An exception java.lang.NullPointerException
	at reactor.netty.http.client.HttpClientMetricsHandler.exceptionCaught(HttpClientMetricsHandler.java:124)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:268)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireExceptionCaught(CombinedChannelDuplexHandler.java:426)
	at io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:92)
	at io.netty.channel.CombinedChannelDuplexHandler$1.fireExceptionCaught(CombinedChannelDuplexHandler.java:147)
	at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:143)
	at io.netty.channel.CombinedChannelDuplexHandler.exceptionCaught(CombinedChannelDuplexHandler.java:233)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:268)
	at reactor.netty.channel.ChannelMetricsHandler.exceptionCaught(ChannelMetricsHandler.java:129)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:268)
	at io.netty.handler.ssl.SslHandler.exceptionCaught(SslHandler.java:1095)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:268)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1389)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:297)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:276)
	at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:745)
was thrown by a user handler's exceptionCaught() method while handling the following exception:

java.io.IOException: An existing connection was forcibly closed by the remote host
	at sun.nio.ch.SocketDispatcher.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
	at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:247)
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1147)
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:347)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:745)

@violetagg
Member

violetagg commented Mar 18, 2021

@hdineth Yep, this NPE seems to be fixed: #896

@v-saurabhsingh

v-saurabhsingh commented Sep 20, 2021

I am getting this if I shut down a connected Reactor Netty client:

reactor.core.Exceptions$ErrorCallbackNotImplemented: java.io.IOException: An existing connection was forcibly closed by the remote host
Caused by: java.io.IOException: An existing connection was forcibly closed by the remote host
	at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:na]
	at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:na]
	at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276) ~[na:na]
	at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:233) ~[na:na]
	at java.base/sun.nio.ch.IOUtil.read(IOUtil.java:223) ~[na:na]
	at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:358) ~[na:na]
	at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-all-4.1.68.Final.jar:4.1.68.Final]
	at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2021-09-20 21:44:17.521  INFO 6368 --- [actor-tcp-nio-2] org.reactor.tcp.Server                   : out doOnCancel

@rmorgan @violetagg

@v-saurabhsingh

v-saurabhsingh commented Sep 20, 2021

I am getting this if I shut down a connected client:

*************************************** Server Implementation *******************************************

TcpServer tcpServer = TcpServer.create()
				.doOnBind(tcpServerConfig-> log.info("doOnBind {}",tcpServerConfig))
				.doOnBound(disposableServer-> log.info("doOnBound {}",disposableServer))
				.doOnUnbound(disposableServer-> log.info("doOnUnbound {}",disposableServer))
				.doOnConnection(connection-> log.info("doOnConnection {}",connection))
				.doOnChannelInit((connectionObserver,channel,remoteAddress)-> log.info("doOnChannelInit {}, {}, {}",connectionObserver,channel,remoteAddress))
				.host("127.0.0.1")
				.port(8089)
				.wiretap(true)
				//.metrics(true)
				.noSSL();
		DisposableServer server = tcpServer.handle((in,out)-> {
			//return inbound.receive().asString().log().then();
			in.receive().asString()
					.doOnError(ex->log.info("in doOnError: {}",ex.getMessage()))
					.doOnCancel(()->log.info("in doOnCancel"))
					.subscribe(i->log.info("in RECEIVED: {}",i));
			return out.neverComplete()
					.doOnTerminate(()->log.info("out doOnTerminate"))
					.doOnError(ex->log.info("out doOnError: {}",ex.getMessage()))
					.doOnCancel(()->log.info("out doOnCancel"));
		}).bindNow();
		server.onDispose()
				.block();

*************************************** Client Implementation *******************************************

	TcpClient client = TcpClient.create()
				.doOnConnect(connection-> log.info("doOnConnect {}",connection))
				.doOnConnected(connection-> log.info("doOnConnected {}",connection))
				.doOnDisconnected(connection-> log.info("doOnDisconnected {}",connection))
				.doOnChannelInit((connectionObserver,channel,remoteAddress)-> log.info("doOnChannelInit {}, {}, {}",connectionObserver,channel,remoteAddress))
				.host("127.0.0.1")
				.port(8089)
				.wiretap(true)
				//.metrics(true)
				.noSSL();
		Connection connection = client.handle((in, out) -> {
			//return out.send(Flux.concat(ByteBufFlux.fromString(Mono.just("echo")), in.receive().retain()))
			in.receive().asString().subscribe(i->{
				log.info("RECEIVED: {}",i);
			});
			return out.sendString(Flux.interval(Duration.ofMillis(100)).map(l -> {
				final String msg= Long.toString(l);
				log.info("SEND: {}",msg);
				return msg;
			}));
		}).connectNow();
		connection.onDispose()
				.block();

@violetagg

@0835030377

Hi guys,

I am also struggling with this, and I read through the comments but could not see a solution. Did anyone manage to explicitly close the WebClient connection before it is forcibly closed by the remote server?

@ryanrupp
Contributor

ryanrupp commented Jan 12, 2022

Hey all, just dropping some details about my experience with this. There are a handful of related issues in the reactor-netty repo that I dug up over time; I will try to go through them and link this comment for visibility.

Issue Details

Specifically, the problem I was running into was that the application (which uses reactor-netty) was being routed through an Azure load balancer. The Azure load balancer has some questionable defaults where it:

  1. Sets the max idle time to 4 minutes (alright that's fine)
  2. By default does not send a TCP RST (close notification) when the idle time is hit

My main issue is with #2. You can read more about enabling this setting in Azure's documentation here, but it seems this is not the default (legacy reasons? the feature wasn't available until about 2018, it seems).

When this flag is disabled the behavior is as follows:

  1. The connection is idle for 4 minutes and Azure closes it. It does not notify the other side of the connection, so the client application (using reactor-netty) is unaware that the close took place
  2. On the next request, Reactor Netty tries to use this stale connection. In our case, what happened at this point is we saw a bunch of TCP retransmissions (I think Azure is blackholing the connection rather than sending an immediate close, which is also problematic). The connection would hang for 5-20 seconds before giving up and throwing an exception (also closing/discarding the connection on the client side)

When this flag is enabled the behavior is as follows:

  1. Connection idle for 4 minutes and Azure closes it. Both sides of the connection are notified of the close. Reactor-netty sees this close and presumably discards the connection from the pool
  2. On the next request Reactor Netty asks for a connection from the pool. At this point either there's a pre-existing live connection in the pool or just a new one is created. Either way, it operates as expected without issue.

So really, for us, the problematic part was Azure never sending a close notification, so the client never knew the connection was stale. (The blackhole behavior, rather than closing immediately, also wasn't great; we had application-level retries that covered this, but it produced a delay in API calls.)

Resolution
For our version of this issue it could be resolved in a couple of different ways but with caveats:

Use Reactor-Netty connection max idle time
You can configure a max idle time that is lower than the remote network appliance's lowest max idle time. Here are some common ones we've seen:

  • Azure load balancer default idle timeout (configurable) = 4 minutes
  • AWS application load balancer default timeout (configurable) = 1 minute (we've typically seen this increased though)
  • AWS network load balancer default timeout (not configurable) = 350 seconds
  • AWS Global Accelerator default timeout (not configurable) = 340 seconds

You can configure the Reactor Netty ConnectionProvider with the desired max idle time (adapting the other settings as you see fit), e.g.:

ConnectionProvider connectionProvider = ConnectionProvider.builder("my-http-client")
                .maxIdleTime(maxIdleTime)
                // rest of settings
                .build();

Caveats
As you can see, the idle timeout varies depending on the provider, and this is just listing common cloud infrastructure. So it's hard to pick a good default without making it overly aggressive, e.g. setting it < 60 seconds. Setting it aggressively means you're potentially establishing new connections more often than needed, i.e. more SSL handshakes etc. See the next section about using TCP keep-alive.

Use TCP keep-alive + idle time options
Alternatively (or additionally), you can use TCP keep-alive and set TCP_KEEPIDLE to something aggressive enough that a keep-alive message is sent before your infrastructure's idle timeout occurs. E.g. in the Azure case, broadcasting a keep-alive after less than 4 minutes of idle time would prevent Azure from closing the connection. There is an example of these settings in the Reactor Netty documentation here. Note it may make sense to call out in the documentation that this isn't supported on Windows (beyond just needing Java 11) - I can submit a PR for this (edit: see #1981).
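
For reference, a minimal sketch along the lines of the documented settings (the concrete values are illustrative, and EpollChannelOption requires the Linux native epoll transport):

import io.netty.channel.ChannelOption;
import io.netty.channel.epoll.EpollChannelOption;
import reactor.netty.http.client.HttpClient;

// First keep-alive probe after 3 minutes of idle time (under Azure's
// 4-minute timeout), then every 60 seconds, giving up after 8 probes.
HttpClient client = HttpClient.create()
        .option(ChannelOption.SO_KEEPALIVE, true)
        .option(EpollChannelOption.TCP_KEEPIDLE, 180)
        .option(EpollChannelOption.TCP_KEEPINTVL, 60)
        .option(EpollChannelOption.TCP_KEEPCNT, 8);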

Caveats
TCP keep-alive is only supported on:

  1. Java 11+ (technically I think this got backported to newer versions of Java 8, but this shouldn't be relied on)
  2. OSX or Linux, NOT Windows

For more information see the JDK change that added this here. If you look at the resolution, it was only added for OSX and Linux. It is still not supported on Windows as of Java 17.

You could enable TCP keep-alive generically (which is supported on all platforms, even in Java 8) and then configure TCP_KEEPIDLE (the idle time before issuing keep-alive messages) at the operating system level, but this is cumbersome/error-prone. The OS default is typically 2 hours of idle time before TCP keep-alive messages start, which is not aggressive enough to prevent infrastructure like the Azure load balancer from severing the connection.

Produce application level keep-alive messages
If you have a dummy endpoint, you could just hit it periodically, but you would have to do it for all connections in the pool. I didn't really explore this much; it seems a bit like reinventing the wheel, i.e. reproducing the concept of TCP keep-alive. A rough sketch follows below.
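
For completeness, a rough sketch of what that might look like with a shared WebClient instance assumed in scope (the /ping path and the 60-second interval are pure assumptions; as noted, each ping only exercises whichever pooled connection happens to serve it):

import java.time.Duration;

import org.springframework.web.reactive.function.client.WebClient;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Periodically hit a hypothetical lightweight endpoint to generate traffic.
// This keeps some connection warm but cannot guarantee that every idle
// connection in the pool is touched.
Flux.interval(Duration.ofSeconds(60))
        .flatMap(tick -> webClient.get()
                .uri("/ping")
                .retrieve()
                .toBodilessEntity()
                .onErrorResume(e -> Mono.empty()))
        .subscribe();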

What we did

We ended up just using the max idle setting at the Reactor Netty pool level and defaulting it to < 4 minutes (220 seconds) to cover our most common lowest denominator (Azure load balancers configured this way); see the sketch below. Again, note that if the remote side actually sends a close notification, this generally is not a problem. However, there is probably a rarer race condition where the remote side hits the idle timeout right as the client side tries to use the connection. You can find some issue descriptions like this; that case requires more exact timing to reproduce and would be rarer, although it is probably good to prevent it by tossing these connections out pre-emptively.
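
As a concrete sketch of that setup (restating the builder snippet above with our 220-second value and the Spring WebClient wiring; the pool name is illustrative):

import java.time.Duration;

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

// Discard pooled connections idle for more than 220s, staying under the
// Azure load balancer's 4-minute idle timeout.
ConnectionProvider provider = ConnectionProvider.builder("my-http-client")
        .maxIdleTime(Duration.ofSeconds(220))
        .build();

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(HttpClient.create(provider)))
        .build();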

If possible though I would also suggest changing the Azure load balancer setting to enable the TCP RST behavior so proper close notifications come through. In my case this was not infrastructure I controlled and would have been larger in scope/higher risk to change that setting.

@miguellgramacho96

miguellgramacho96 commented Mar 1, 2023

I'm a developer facing this issue in 2023. I can see how load balancers not sending RST packets can create unpredictable behaviour. At this date, Azure does present a setting to enable sending RST packets, but one would still have to rely on keep-alive probes to extend/reset the load balancer's max idle time. I'm wondering whether netty could close the stale channel and open a new one instead of closing it and returning an exception... It does look like components that act as passthroughs do not send RST packets, and I would argue that 99.9% of developers who use netty are not aware that they could end up with this sort of unpredictable behaviour if a load balancer is later plugged in. It would be more resilient if netty could close and open a new connection in this specific case.

@pderop
Contributor

pderop commented Mar 1, 2023

@miguellgramacho96, could you clarify whether you are suggesting that the connection pool should automatically reopen a connection that is closed on idle timeout, or that a new connection should be opened to retry a request that could not be sent over a stale connection?

In the latter case, I think the reactor-netty client already automatically retries the request with either a new connection or an existing available one from the pool (but only if the request headers have not been sent). Note also that, as @ryanrupp said, the connection will hang for some time (like "5-20 seconds") before giving up on a stale connection, so the request will be retried but we can't avoid the delay.

Maybe @violetagg will want to add comments on this?

@miguellgramacho96

@pderop Hi, thanks for replying. And sorry for my late reply.

In the meantime, I have set the max idle time lower than the AWS Network Load Balancer's idle timeout to solve the issue.

I was facing the following issue: org.springframework.web.reactive.function.client.WebClientRequestException: readAddress(..) failed: Connection reset by peer; nested exception is io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: Connection reset by peer.

I faced this issue every time a new request triggered the reuse of a channel that had stayed idle for more than the AWS NLB max idle time, as far as I understood. So yes, that led me to think that reactor-netty was somehow unable to reuse the connection, but I'm also not familiar with the reactor-netty source code, e.g. the case you brought up of retrying on an existing channel from the pool only if the request headers have not been sent...

@fsilvaortiz

We have the same issue with the Azure load balancer when using WebClient. However, our challenge is that even when we select a short max idle time using ConnectionProvider, the pool always retains one idle connection. We observed this behavior through the metrics information.

@violetagg
Member

@fsilvaortiz This issue is closed. Please open a new issue for your use case and provide detailed description and how the issue can be reproduced.

@ryanrupp
Contributor

ryanrupp commented Oct 2, 2024

From my previous comment here

TCP keep-alive is only supported on:

  1. Java 11+ (technically I think this got backported to newer versions of Java 8, but this shouldn't be relied on)
  2. OSX or Linux, NOT Windows

Re: #2 Windows support, it looks like the keep-alive extended options (such as the ability to set the idle time before the first probe, TCP_KEEPIDLE, to something more aggressive than the OS default, which is generally not sufficient) are available as of Java 22 (see JDK-8308593) and were backported to the "LTS" versions (17 and 21, see links off that main issue; edit - I read this wrong and thought 11 was backported too, but that does not seem to be the case, see here). So, if either:

  1. You're using >= Java 11 (or a sufficiently backported patch version of 8) + Mac/Linux
  2. OR, you're using Windows (release note here mentions >= Windows 10) but can control the Java version (17/21 with the corresponding patch versions or >= 22)

Then tuning TCP keep-alive at the client level may be the preferred route. The commit from here can be adjusted now to mention this (I will try to submit a PR when I get a chance).

I'm thinking about this again because I'm trying to resolve an intermittent PrematureCloseException that still lingers after tuning the max idle connection time down to 3 minutes. That value is really just a guess at an acceptable idle time; the remote side is unknown and could be more aggressive, so TCP keep-alive is probably preferred, if available, when you don't control the network flow (with something aggressive enough like a 30s TCP_KEEPIDLE; the MSSQL JDBC driver, for instance, is doing this here).
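
For what it's worth, a sketch of that aggressive tuning on the NIO transport, where the extended options come from jdk.net (this assumes a JVM/OS combination that supports them, per the caveats above; the 30s/10s/3 values are assumptions):

import io.netty.channel.ChannelOption;
import io.netty.channel.socket.nio.NioChannelOption;
import jdk.net.ExtendedSocketOptions;
import reactor.netty.http.client.HttpClient;

// First probe after 30s of idle time, retried every 10s, up to 3 probes.
HttpClient client = HttpClient.create()
        .option(ChannelOption.SO_KEEPALIVE, true)
        .option(NioChannelOption.of(ExtendedSocketOptions.TCP_KEEPIDLE), 30)
        .option(NioChannelOption.of(ExtendedSocketOptions.TCP_KEEPINTERVAL), 10)
        .option(NioChannelOption.of(ExtendedSocketOptions.TCP_KEEPCOUNT), 3);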
