
[CI] org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT.test fails on Test Index and Search locale dependent mappings / dates #49719

Closed
dliappis opened this issue Nov 29, 2019 · 15 comments
Labels: :Core/Infra/Core, Team:Core/Infra, >test-failure

Comments

@dliappis (Contributor)

Since #48703 we are seeing failures like: https://gradle-enterprise.elastic.co/s/m64s5k5k3pbn6/tests/qiq773hxdcg7e-2pri752okzdby

Failure:

expected [2xx] status code but api [indices.create] returned [500 Internal Server Error] [{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[v6.8.3-2][127.0.0.1:36768][indices:admin/create]","stack_trace":"[[v6.8.3-2][127.0.0.1:36768][indices:admin/create]]; nested: RemoteTransportException[[v6.8.3-2][127.0.0.1:36768][indices:admin/create]]; nested: IllegalStateException[DocumentMapper serialization result is different from source. \n--> Source [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"88E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]\n--> Result [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"888E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]];\n\tat org.elasticsearch.ElasticsearchException.guessRootCauses(ElasticsearchException.java:644)\n\tat org.elasticsearch.ElasticsearchException.generateFailureXContent(ElasticsearchException.java:572)\n\tat org.elasticsearch.rest.BytesRestResponse.build(BytesRestResponse.java:138)\n\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:96)\n\tat org.elasticsearch.rest.BytesRestResponse.<init>(BytesRestResponse.java:91)\n\tat org.elasticsearch.rest.action.RestActionListener.onFailure(RestActionListener.java:58)\n\tat org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:79)\n\tat org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$1.handleException(TransportMasterNodeAction.java:190)\n\tat org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1119)\n\tat org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:243)\n\tat org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225)\n\tat org.elasticsearch.transport.InboundHandler.handleException(InboundHandler.java:241)\n\tat org.elasticsearch.transport.InboundHandler.handlerResponseError(InboundHandler.java:233)\n\tat org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:136)\n\tat org.elasticsearch.transport.InboundHandler.inboundMessage(InboundHandler.java:102)\n\tat org.elasticsearch.transport.TcpTransport.inboundMessage(TcpTransport.java:673)\n\tat org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:62)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)\n\tat io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:326)\n\tat io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)\n\tat io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)\n\tat 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)\n\tat io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)\n\tat io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)\n\tat io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)\n\tat io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)\n\tat io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:600)\n\tat io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:554)\n\tat io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)\n\tat io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050)\n\tat io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: RemoteTransportException[[v6.8.3-2][127.0.0.1:36768][indices:admin/create]]; nested: IllegalStateException[DocumentMapper serialization result is different from source. \n--> Source [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"88E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]\n--> Result [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"888E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]];\nCaused by: java.lang.IllegalStateException: DocumentMapper serialization result is different from source. 
\n--> Source [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"88E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]\n--> Result [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"888E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]\n\tat org.elasticsearch.index.mapper.MapperService.assertSerialization(MapperService.java:617)\n\tat java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90)\n\tat java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1812)\n\tat java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)\n\tat java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:499)\n\tat java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:486)\n\tat java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)\n\tat java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)\n\tat java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)\n\tat java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)\n\tat java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)\n\tat org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:592)\n\tat org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403)\n\tat org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:330)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$IndexCreationTask.execute(MetaDataCreateIndexService.java:481)\n\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\n\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643)\n\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270)\n\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200)\n\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135)\n\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\n\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"}],"type":"illegal_state_exception","reason":"DocumentMapper serialization result is different from source. \n--> Source [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"88E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]\n--> Result [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"888E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]","stack_trace":"java.lang.IllegalStateException: DocumentMapper serialization result is different from source. 
\n--> Source [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"88E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]\n--> Result [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"888E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]\n\tat org.elasticsearch.index.mapper.MapperService.assertSerialization(MapperService.java:617)\n\tat java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90)\n\tat java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1812)\n\tat java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)\n\tat java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:499)\n\tat java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:486)\n\tat java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)\n\tat java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)\n\tat java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)\n\tat java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)\n\tat java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)\n\tat org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:592)\n\tat org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:403)\n\tat org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:330)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$IndexCreationTask.execute(MetaDataCreateIndexService.java:481)\n\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\n\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:643)\n\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:270)\n\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:200)\n\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:135)\n\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\n\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"},"status":500}]Open stacktrace
[2019-11-29T08:03:40,087][INFO ][o.e.b.MixedClusterClientYamlTestSuiteIT] [test] [p0=search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates] before test
Nov 29, 2019 8:03:40 AM org.elasticsearch.client.RestClient logResponse
WARNING: request [PUT http://[::1]:34553/test_index?include_type_name=false&error_trace=true] returned 1 warnings: [299 Elasticsearch-7.6.0-SNAPSHOT-90e9d61f2b75aa2962685175ea1bd92b8bb7c223 "[types removal] Using include_type_name in create index requests is deprecated. The parameter will be removed in the next major version."]
[2019-11-29T08:03:40,108][INFO ][o.e.b.MixedClusterClientYamlTestSuiteIT] [test] Stash dump on test failure [{
  "stash" : {
    "body" : {
      "error" : {
        "root_cause" : [
          {
            "type" : "remote_transport_exception",
            "reason" : "[v6.8.3-2][127.0.0.1:36768][indices:admin/create]",
            "stack_trace" : "[[v6.8.3-2][127.0.0.1:36768][indices:admin/create]]; nested: RemoteTransportException[[v6.8.3-2][127.0.0.1:36768][indices:admin/create]]; nested: IllegalStateException[DocumentMapper serialization result is different from source. 
--> Source [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"88E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]
--> Result [{\"_doc\":{\"properties\":{\"date_field\":{\"type\":\"date\",\"format\":\"888E, d MMM uuuu HH:mm:ss Z\",\"locale\":\"de\"}}}}]];

So the difference here is an extra leading 8 in the date_field format: the source mapping has format "88E, d MMM uuuu HH:mm:ss Z", while the re-serialized result has "888E, d MMM uuuu HH:mm:ss Z".
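
For context: in 6.x, a leading "8" on a date format string opted the field into java-time (rather than Joda) parsing. If such a marker is prepended unconditionally on every serialization pass, the operation is not idempotent and produces exactly this kind of source/result mismatch. A minimal, hypothetical Java sketch of that failure mode (the names are invented for illustration and are not the actual Elasticsearch code):

// Hypothetical sketch: a java-time marker prepended on every serialization
// pass gains one "8" per round trip, so a source-vs-reserialized comparison
// (like the DocumentMapper serialization assertion above) fails.
public class DateFormatPrefixSketch {

    // Buggy variant: prepends the marker unconditionally.
    static String markJavaTime(String format) {
        return "8" + format;
    }

    // Idempotent variant: adds the marker only when it is missing.
    static String markJavaTimeOnce(String format) {
        return format.startsWith("8") ? format : "8" + format;
    }

    public static void main(String[] args) {
        String source = "88E, d MMM uuuu HH:mm:ss Z"; // format as stored in the source mapping
        System.out.println(source.equals(markJavaTime(source)));     // false: result is "888E, ..."
        System.out.println(source.equals(markJavaTimeOnce(source))); // true: round trip is stable
    }
}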

@dliappis added the :Core/Infra/Core and >test-failure labels Nov 29, 2019
@elasticmachine (Collaborator)

Pinging @elastic/es-core-infra (:Core/Infra/Core)

dliappis added a commit to dliappis/elasticsearch that referenced this issue Nov 29, 2019
@dliappis (Contributor, Author)

For now I have raised #49721 to mute the test.

@jaymode (Member) commented Dec 3, 2019

Fixed in #49724. @pgomulka note that GitHub will not automatically close an issue when a fix is merged to a branch other than master.

@jaymode closed this as completed Dec 3, 2019
@droberts195 (Contributor)

#49724 skips the tests against versions older than 6.8.5, but I think more versions need to be skipped (or other fixes are required), as this same test caused a problem when 7.x was BWC tested against 7.1.0.

The failing build was https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+7.x+default-distro+bwc/BWC_VERSION=7.1.0,nodes=centos-7&&immutable/363/console

One of the nodes in the cluster died with an assertion failure while running search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates. The assertion failure stack trace is in v7.1.0-2/logs/v7.1.0.log in the server-side logs:

[2019-12-06T03:05:48,389][DEBUG][o.e.c.c.PublicationTransportHandler] [v7.1.0-2] received diff cluster state version [7527] with uuid [HQ07hpYkQpuP-zai32NBew], diff size [396]
[2019-12-06T03:05:48,394][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [v7.1.0-2] fatal error in thread [elasticsearch[v7.1.0-2][clusterApplierService#updateTask][T#1]], exiting
java.lang.AssertionError: {_doc=org.elasticsearch.index.mapper.DocumentMapper@c732ec4}
    at org.elasticsearch.index.mapper.MapperService.assertMappingVersion(MapperService.java:264) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.index.mapper.MapperService.updateMapping(MapperService.java:218) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.index.IndexService.updateMapping(IndexService.java:551) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.updateIndices(IndicesClusterStateService.java:533) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:266) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$5(ClusterApplierService.java:478) ~[elasticsearch-7.1.0.jar:7.1.0]
    at java.lang.Iterable.forEach(Iterable.java:75) ~[?:1.8.0_231]
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:476) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:459) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:413) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:164) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.1.0.jar:7.1.0]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.1.0.jar:7.1.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_231]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_231]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_231]

The corresponding client-side log (extracted from https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+7.x+default-distro+bwc/BWC_VERSION=7.1.0,nodes=centos-7&&immutable/363/consoleText) is:

  1> [2019-12-06T00:05:48,356][INFO ][o.e.b.MixedClusterClientYamlTestSuiteIT] [test] [p0=search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates] before test
  1> [2019-12-06T00:06:48,621][INFO ][o.e.b.MixedClusterClientYamlTestSuiteIT] [test] Stash dump on test failure [{
  1>   "stash" : {
  1>     "body" : {
  1>       "error" : {
  1>         "root_cause" : [ ],
  1>         "type" : "search_phase_execution_exception",
  1>         "reason" : "all shards failed",
  1>         "phase" : "query",
  1>         "grouped" : true,
  1>         "failed_shards" : [ ],
  1>         "stack_trace" : "Failed to execute phase [query], all shards failed
  1> 	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:296)
  1> 	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:139)
  1> 	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:259)
  1> 	at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:105)
  1> 	at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:251)
  1> 	at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:172)
  1> 	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
  1> 	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41)
  1> 	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751)
  1> 	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
  1> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  1> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  1> 	at java.lang.Thread.run(Thread.java:748)
  1> "
  1>       },
  1>       "status" : 503
  1>     }
  1>   }
  1> }]

I could not reproduce locally using:

./gradlew ':qa:mixed-cluster:v7.1.0#mixedClusterTest' --tests "org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT.test {p0=search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates}" \
  -Dtests.seed=FFEEF395D279E9FB \
  -Dtests.security.manager=true \
  -Dtests.locale=lt \
  -Dtests.timezone=America/Rosario \
  -Dtests.distribution=default \
  -Dcompiler.java=12 \
  -Druntime.java=8

So the problem may well depend on whether the master node is 7.1 or 7.6, and on whether the request is sent directly to the master or via a non-master coordinating node.

@droberts195 reopened this Dec 6, 2019
@pgomulka (Contributor) commented Dec 9, 2019

@droberts195 you are correct, thanks for noticing this.
This is only fixed in master and 7.6 (I have a backport to 6.8, but I am struggling with the merge).
I am not sure whether I should fix this in older 7.x versions as well, or whether I should instead add a Features flag in master, 7.6, and 6.8.

@benwtrent (Member)

Failure occurred again: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+7.x+multijob-unix-compatibility/os=oraclelinux-6&&immutable/437/console

Mixed cluster between 7.5.1 and 7.6.0.

06:43:40 org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT > test {p0=search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates} FAILED
06:43:40     java.lang.AssertionError: Failure at [search/180_locale_dependent_mapping:27]: expected [2xx] status code but api [search] returned [503 Service Unavailable] [{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[],"stack_trace":"Failed to execute phase [query], all shards failed\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:534)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:305)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:563)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:384)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:219)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:284)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"},"status":503}]
06:43:40 
06:43:40         Caused by:
06:43:40         java.lang.AssertionError: expected [2xx] status code but api [search] returned [503 Service Unavailable] [{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[],"stack_trace":"Failed to execute phase [query], all shards failed\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:534)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:305)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:563)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:384)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:219)\n\tat org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:284)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:44)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:773)\n\tat org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"},"status":503}]

Reproduce line: ./gradlew ':qa:mixed-cluster:v7.5.1#mixedClusterTest' --tests "org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT.test {p0=search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates}" -Dtests.seed=72298F5B38DF5C62 -Dtests.security.manager=true -Dtests.locale=sr-RS -Dtests.timezone=Europe/Kaliningrad -Dcompiler.java=12 -Druntime.java=8

@pgomulka (Contributor)

I have extended the YAML skip section so that ranges of versions can be skipped (#50028).
I will monitor tomorrow and then close this issue; a sketch of the new range syntax is shown below.
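
For illustration, a hypothetical skip section using the new version-range syntax might look like the following (the test name matches this suite, but the bounds and reason text are invented here, not copied from #50028):

---
"Test Index and Search locale dependent mappings / dates":
  - skip:
      version: "6.0.0 - 6.8.4, 7.0.0 - 7.5.99"
      reason: "locale dependent date mappings fail in mixed clusters; see #49719"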

@pgomulka (Contributor) commented Dec 17, 2019

It looks like the problem mentioned by @benwtrent persists. This test fails regularly, about once a day, always with 503 Service Unavailable and reason "type":"search_phase_execution_exception","reason":"all shards failed".
I am unable to reproduce it locally.
I suspect this is no longer related to the date implementation.

https://build-stats.elastic.co/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:'2019-12-13T09:20:15.573Z',mode:absolute,to:'2019-12-17T00:52:02.305Z'))&_a=(columns:!(_source),index:b646ed00-7efc-11e8-bf69-63c8ef516157,interval:auto,query:(language:lucene,query:'180_locale_dependent_mapping'),sort:!(process.time-start,desc))

@mark-vieira (Contributor)

This failure continues to happen on a daily basis. Should we consider muting?

@albertzaharovits (Contributor)

Another one, same test, this time error 400, in https://gradle-enterprise.elastic.co/s/kwjlcfspzwtxe:

org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT > test {p0=search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates} FAILED	
    java.lang.AssertionError: Failure at [search/180_locale_dependent_mapping:8]: expected [2xx] status code but api [indices.create] returned [400 Bad Request] [{"error":{"root_cause":[{"type":"resource_already_exists_exception","reason":"index [test_index/t3nF4f_FRDG0MZZ5bUuV1Q] already exists","index_uuid":"t3nF4f_FRDG0MZZ5bUuV1Q","index":"test_index","stack_trace":"[test_index/t3nF4f_FRDG0MZZ5bUuV1Q] ResourceAlreadyExistsException[index [test_index/t3nF4f_FRDG0MZZ5bUuV1Q] already exists]\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:156)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:694)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.applyCreateIndexRequest(MetaDataCreateIndexService.java:270)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:245)\n\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\n\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702)\n\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324)\n\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219)\n\tat org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73)\n\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151)\n\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\n\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"}],"type":"resource_already_exists_exception","reason":"index [test_index/t3nF4f_FRDG0MZZ5bUuV1Q] already exists","index_uuid":"t3nF4f_FRDG0MZZ5bUuV1Q","index":"test_index","stack_trace":"[test_index/t3nF4f_FRDG0MZZ5bUuV1Q] ResourceAlreadyExistsException[index [test_index/t3nF4f_FRDG0MZZ5bUuV1Q] already exists]\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:156)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:694)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.applyCreateIndexRequest(MetaDataCreateIndexService.java:270)\n\tat org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:245)\n\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\n\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702)\n\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324)\n\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219)\n\tat 
org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73)\n\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151)\n\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\n\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\n\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\n\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"},"status":400}]	
        at __randomizedtesting.SeedInfo.seed([121D0CD9F2B5143B:9A4933035C4979C3]:0)	
        at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:405)	
        at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:382)	
        at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)

If I see another one today, will mute.

@tvernum (Contributor) commented Feb 10, 2020

Happened again

org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT > test {p0=search/180_locale_dependent_mapping/Test Index and Search locale dependent mappings / dates} FAILED
    java.lang.AssertionError: Failure at [search/180_locale_dependent_mapping:8]: expected [2xx] status code but api [indices.create] returned [400 Bad Request] [{"error":{"root_cause":[{"type":"resource_already_exists_exception","reason":"index [test_index/ppJ756vPSRmetfBWkmJG_g] already exists","index_uuid":"ppJ756vPSRmetfBWkmJG_g","index":"test_index","stack_trace":"[test_index/ppJ756vPSRmetfBWkmJG_g] ResourceAlreadyExistsException[index [test_index/ppJ756vPSRmetfBWkmJG_g] already exists]
    at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validateIndexName(MetaDataCreateIndexService.java:156)
    at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.validate(MetaDataCreateIndexService.java:694)
    at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService.applyCreateIndexRequest(MetaDataCreateIndexService.java:270)
    at org.elasticsearch.cluster.metadata.MetaDataCreateIndexService$1.execute(MetaDataCreateIndexService.java:245)
    at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)
...
        at __randomizedtesting.SeedInfo.seed([513E0F11137DA35C:D96A30CBBD81CEA4]:0)
        at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:405)
        at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:382)

Will mute.

@tvernum (Contributor) commented Feb 10, 2020

From investigation in build-stats, it looks like there are a couple of distinct failure cases in this test. I don't know whether they share the same root cause, but I'm going to mute this test on both the 7.x and 7.6 branches.

tvernum added a commit to tvernum/elasticsearch that referenced this issue Feb 10, 2020
Muting this test as it has frequent failures.

See: elastic#49719
tvernum added a commit that referenced this issue Feb 10, 2020
Muting this test as it has frequent failures.

See: #49719
tvernum added a commit to tvernum/elasticsearch that referenced this issue Feb 10, 2020
Muting this test as it has frequent failures.

See: elastic#49719
Backport of: elastic#52116
tvernum added a commit that referenced this issue Feb 10, 2020
Muting this test as it has frequent failures.

Relates: #49719
Backport of: #52116
@rjernst added the Team:Core/Infra label May 4, 2020
@rjernst added the needs:triage label Dec 3, 2020
@williamrandolph (Contributor)

I'm looking at this for the Core/Infra Team's annual issue triage. I believe we should keep this issue open and try to get around to fixing the test, since these tests are still muted in the 7.x branch. I'm not sure about the level of effort.

@williamrandolph removed the needs:triage label Jan 7, 2021
@gwbrown (Contributor) commented Sep 8, 2021

This issue has had a long and complex history, and it looks like there are a few different failures here. At least two of the failures I see are the original ones related to the date/time mapping, but there are also failures with "type":"resource_already_exists_exception","reason":"index [test_index/ppJ756vPSRmetfBWkmJG_g] already exists", which look unrelated.

We no longer appear to have any tests muted on master because of this, so I'm going to close this issue. If this test keeps failing, please do not reopen this issue; open a new one instead and link back to this one. This issue's history has grown so long that it has become an impediment to actually investigating the failures.

@gwbrown closed this as completed Sep 8, 2021